2023-07-12 20:17:58,056 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc 2023-07-12 20:17:58,077 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 20:17:58,095 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 20:17:58,095 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c, deleteOnExit=true 2023-07-12 20:17:58,095 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 20:17:58,096 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/test.cache.data in system properties and HBase conf 2023-07-12 20:17:58,097 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 20:17:58,097 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir in system properties and HBase conf 2023-07-12 20:17:58,098 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 20:17:58,098 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 20:17:58,098 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 20:17:58,219 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 20:17:58,655 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 20:17:58,659 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:17:58,660 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:17:58,660 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 20:17:58,660 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:17:58,661 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 20:17:58,661 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 20:17:58,662 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:17:58,662 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:17:58,662 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 20:17:58,663 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/nfs.dump.dir in system properties and HBase conf 2023-07-12 20:17:58,663 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir in system properties and HBase conf 2023-07-12 20:17:58,664 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:17:58,664 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 20:17:58,665 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 20:17:59,193 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:17:59,197 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:17:59,492 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 20:17:59,711 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 20:17:59,728 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:17:59,770 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:17:59,814 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/Jetty_localhost_34599_hdfs____.xroaiq/webapp 2023-07-12 20:17:59,983 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34599 2023-07-12 20:17:59,998 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:17:59,998 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:18:00,464 WARN [Listener at localhost/41485] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:00,539 WARN [Listener at localhost/41485] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:00,557 WARN [Listener at localhost/41485] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:00,563 INFO [Listener at localhost/41485] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:00,571 INFO [Listener at localhost/41485] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/Jetty_localhost_37475_datanode____jmj5zp/webapp 2023-07-12 20:18:00,677 INFO [Listener at localhost/41485] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37475 2023-07-12 20:18:01,125 WARN [Listener at localhost/36355] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:01,180 WARN [Listener at localhost/36355] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:01,184 WARN [Listener at localhost/36355] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:01,187 INFO [Listener at localhost/36355] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:01,195 INFO [Listener at localhost/36355] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/Jetty_localhost_36157_datanode____vbk01j/webapp 2023-07-12 20:18:01,323 INFO [Listener at localhost/36355] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36157 2023-07-12 20:18:01,342 WARN [Listener at localhost/46499] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:01,396 WARN [Listener at localhost/46499] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:01,402 WARN [Listener at localhost/46499] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:01,404 INFO [Listener at localhost/46499] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:01,411 INFO [Listener at localhost/46499] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/Jetty_localhost_36957_datanode____hoodhb/webapp 2023-07-12 20:18:01,583 INFO [Listener at localhost/46499] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36957 2023-07-12 20:18:01,634 WARN [Listener at localhost/36071] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:01,860 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x35433716664a5102: Processing first storage report for DS-1cb4aa6a-03af-489e-bae3-838444f77a47 from datanode e57aebba-cd55-4500-9ed9-ba03d666544d 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x35433716664a5102: from storage DS-1cb4aa6a-03af-489e-bae3-838444f77a47 node DatanodeRegistration(127.0.0.1:39263, datanodeUuid=e57aebba-cd55-4500-9ed9-ba03d666544d, infoPort=37827, 
infoSecurePort=0, ipcPort=36071, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x231c5cd2312418ef: Processing first storage report for DS-27017766-40ca-43a4-88da-0658c7086ccb from datanode 5deb8b17-48b3-4e56-9487-507fe6d85b8d 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x231c5cd2312418ef: from storage DS-27017766-40ca-43a4-88da-0658c7086ccb node DatanodeRegistration(127.0.0.1:46089, datanodeUuid=5deb8b17-48b3-4e56-9487-507fe6d85b8d, infoPort=35333, infoSecurePort=0, ipcPort=46499, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2c3b6f40a886397: Processing first storage report for DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0 from datanode bcc5c7f8-f2ab-463d-a9ca-1fbcbb6b1d3f 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2c3b6f40a886397: from storage DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0 node DatanodeRegistration(127.0.0.1:38053, datanodeUuid=bcc5c7f8-f2ab-463d-a9ca-1fbcbb6b1d3f, infoPort=42325, infoSecurePort=0, ipcPort=36355, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x35433716664a5102: Processing first storage report for DS-df4cdac5-f42f-4456-add0-e2d9c062628c from datanode e57aebba-cd55-4500-9ed9-ba03d666544d 2023-07-12 20:18:01,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x35433716664a5102: from storage DS-df4cdac5-f42f-4456-add0-e2d9c062628c node DatanodeRegistration(127.0.0.1:39263, datanodeUuid=e57aebba-cd55-4500-9ed9-ba03d666544d, infoPort=37827, infoSecurePort=0, ipcPort=36071, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:01,863 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x231c5cd2312418ef: Processing first storage report for DS-e40630a3-46fe-4bc3-bd9e-95dace102842 from datanode 5deb8b17-48b3-4e56-9487-507fe6d85b8d 2023-07-12 20:18:01,863 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x231c5cd2312418ef: from storage DS-e40630a3-46fe-4bc3-bd9e-95dace102842 node DatanodeRegistration(127.0.0.1:46089, datanodeUuid=5deb8b17-48b3-4e56-9487-507fe6d85b8d, infoPort=35333, infoSecurePort=0, ipcPort=46499, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:01,863 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2c3b6f40a886397: Processing first storage report for DS-b19b81ed-617d-48ca-ac2e-e8bdeba0ace7 from datanode bcc5c7f8-f2ab-463d-a9ca-1fbcbb6b1d3f 2023-07-12 20:18:01,863 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2c3b6f40a886397: from storage 
DS-b19b81ed-617d-48ca-ac2e-e8bdeba0ace7 node DatanodeRegistration(127.0.0.1:38053, datanodeUuid=bcc5c7f8-f2ab-463d-a9ca-1fbcbb6b1d3f, infoPort=42325, infoSecurePort=0, ipcPort=36355, storageInfo=lv=-57;cid=testClusterID;nsid=78042957;c=1689193079268), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:02,075 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc 2023-07-12 20:18:02,175 INFO [Listener at localhost/36071] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/zookeeper_0, clientPort=51228, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 20:18:02,194 INFO [Listener at localhost/36071] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51228 2023-07-12 20:18:02,210 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:02,212 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:02,925 INFO [Listener at localhost/36071] util.FSUtils(471): Created version file at hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 with version=8 2023-07-12 20:18:02,925 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/hbase-staging 2023-07-12 20:18:02,935 DEBUG [Listener at localhost/36071] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 20:18:02,935 DEBUG [Listener at localhost/36071] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 20:18:02,935 DEBUG [Listener at localhost/36071] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 20:18:02,935 DEBUG [Listener at localhost/36071] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 20:18:03,331 INFO [Listener at localhost/36071] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 20:18:03,875 INFO [Listener at localhost/36071] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:03,918 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:03,919 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:03,919 INFO [Listener at localhost/36071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:03,920 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:03,920 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:04,091 INFO [Listener at localhost/36071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:04,174 DEBUG [Listener at localhost/36071] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 20:18:04,275 INFO [Listener at localhost/36071] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42533 2023-07-12 20:18:04,286 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:04,289 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:04,316 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42533 connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:04,367 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:425330x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:04,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42533-0x1015b2f70320000 connected 2023-07-12 20:18:04,415 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:04,416 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:04,421 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:04,433 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42533 2023-07-12 20:18:04,433 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42533 2023-07-12 20:18:04,434 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42533 2023-07-12 20:18:04,435 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42533 2023-07-12 20:18:04,435 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42533 2023-07-12 20:18:04,471 INFO [Listener at localhost/36071] log.Log(170): Logging initialized @7210ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 20:18:04,613 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:04,614 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:04,615 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:04,617 INFO [Listener at localhost/36071] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 20:18:04,617 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:04,617 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:04,622 INFO [Listener at localhost/36071] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 20:18:04,684 INFO [Listener at localhost/36071] http.HttpServer(1146): Jetty bound to port 46167 2023-07-12 20:18:04,686 INFO [Listener at localhost/36071] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:04,723 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:04,726 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@48ee05fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:04,726 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:04,727 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ff95bf2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:04,907 INFO [Listener at localhost/36071] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:04,919 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:04,919 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:04,921 INFO [Listener at localhost/36071] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:04,928 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:04,953 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5f9ed0a6{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/jetty-0_0_0_0-46167-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6465279011565431609/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:04,964 INFO [Listener at localhost/36071] server.AbstractConnector(333): Started ServerConnector@7d3cded5{HTTP/1.1, (http/1.1)}{0.0.0.0:46167} 2023-07-12 20:18:04,965 INFO [Listener at localhost/36071] server.Server(415): Started @7704ms 2023-07-12 20:18:04,968 INFO [Listener at localhost/36071] master.HMaster(444): hbase.rootdir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6, hbase.cluster.distributed=false 2023-07-12 20:18:05,045 INFO [Listener at localhost/36071] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:05,045 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,045 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,046 INFO 
[Listener at localhost/36071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:05,046 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,046 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:05,051 INFO [Listener at localhost/36071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:05,054 INFO [Listener at localhost/36071] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41567 2023-07-12 20:18:05,057 INFO [Listener at localhost/36071] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:05,064 DEBUG [Listener at localhost/36071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:05,065 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,066 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,068 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41567 connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:05,073 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:415670x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:05,074 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41567-0x1015b2f70320001 connected 2023-07-12 20:18:05,074 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:05,076 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:05,077 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:05,077 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41567 2023-07-12 20:18:05,077 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41567 2023-07-12 20:18:05,078 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41567 2023-07-12 20:18:05,078 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41567 2023-07-12 20:18:05,079 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41567 2023-07-12 20:18:05,081 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:05,081 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:05,081 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:05,082 INFO [Listener at localhost/36071] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:05,082 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:05,082 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:05,083 INFO [Listener at localhost/36071] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:05,084 INFO [Listener at localhost/36071] http.HttpServer(1146): Jetty bound to port 34845 2023-07-12 20:18:05,084 INFO [Listener at localhost/36071] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:05,086 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,086 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3751ed89{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:05,087 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,087 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3fd26991{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:05,208 INFO [Listener at localhost/36071] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:05,210 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:05,210 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:05,210 INFO [Listener at localhost/36071] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:05,211 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,215 INFO 
[Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@609673c2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/jetty-0_0_0_0-34845-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4817165305236621331/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:05,216 INFO [Listener at localhost/36071] server.AbstractConnector(333): Started ServerConnector@79947ad{HTTP/1.1, (http/1.1)}{0.0.0.0:34845} 2023-07-12 20:18:05,217 INFO [Listener at localhost/36071] server.Server(415): Started @7956ms 2023-07-12 20:18:05,233 INFO [Listener at localhost/36071] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:05,233 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,233 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,234 INFO [Listener at localhost/36071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:05,234 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,234 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:05,235 INFO [Listener at localhost/36071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:05,236 INFO [Listener at localhost/36071] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39187 2023-07-12 20:18:05,237 INFO [Listener at localhost/36071] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:05,238 DEBUG [Listener at localhost/36071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:05,239 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,241 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,243 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39187 connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:05,247 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:391870x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
20:18:05,249 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39187-0x1015b2f70320002 connected 2023-07-12 20:18:05,249 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:05,250 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:05,250 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:05,253 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39187 2023-07-12 20:18:05,253 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39187 2023-07-12 20:18:05,254 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39187 2023-07-12 20:18:05,254 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39187 2023-07-12 20:18:05,255 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39187 2023-07-12 20:18:05,258 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:05,258 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:05,258 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:05,259 INFO [Listener at localhost/36071] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:05,259 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:05,259 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:05,260 INFO [Listener at localhost/36071] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 20:18:05,260 INFO [Listener at localhost/36071] http.HttpServer(1146): Jetty bound to port 41971 2023-07-12 20:18:05,261 INFO [Listener at localhost/36071] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:05,271 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,272 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e075be0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:05,272 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,273 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6221cb1e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:05,407 INFO [Listener at localhost/36071] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:05,408 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:05,408 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:05,409 INFO [Listener at localhost/36071] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:05,410 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,411 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@10e2164e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/jetty-0_0_0_0-41971-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2417344065836515718/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:05,412 INFO [Listener at localhost/36071] server.AbstractConnector(333): Started ServerConnector@41fa53df{HTTP/1.1, (http/1.1)}{0.0.0.0:41971} 2023-07-12 20:18:05,412 INFO [Listener at localhost/36071] server.Server(415): Started @8152ms 2023-07-12 20:18:05,424 INFO [Listener at localhost/36071] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:05,424 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,425 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,425 INFO [Listener at localhost/36071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:05,425 INFO 
[Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:05,425 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:05,425 INFO [Listener at localhost/36071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:05,427 INFO [Listener at localhost/36071] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46283 2023-07-12 20:18:05,428 INFO [Listener at localhost/36071] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:05,429 DEBUG [Listener at localhost/36071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:05,431 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,433 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,435 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46283 connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:05,440 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:462830x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:05,441 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:462830x0, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:05,442 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46283-0x1015b2f70320003 connected 2023-07-12 20:18:05,443 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:05,444 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:05,446 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46283 2023-07-12 20:18:05,450 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46283 2023-07-12 20:18:05,454 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46283 2023-07-12 20:18:05,458 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46283 2023-07-12 20:18:05,459 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=46283 2023-07-12 20:18:05,462 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:05,462 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:05,463 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:05,464 INFO [Listener at localhost/36071] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:05,464 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:05,464 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:05,464 INFO [Listener at localhost/36071] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:05,465 INFO [Listener at localhost/36071] http.HttpServer(1146): Jetty bound to port 35987 2023-07-12 20:18:05,465 INFO [Listener at localhost/36071] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:05,467 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,467 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b4c0447{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:05,468 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,468 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66807c4c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:05,607 INFO [Listener at localhost/36071] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:05,608 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:05,609 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:05,609 INFO [Listener at localhost/36071] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:05,610 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:05,612 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@42be2682{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/jetty-0_0_0_0-35987-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5462396909590997080/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:05,613 INFO [Listener at localhost/36071] server.AbstractConnector(333): Started ServerConnector@68f85add{HTTP/1.1, (http/1.1)}{0.0.0.0:35987} 2023-07-12 20:18:05,613 INFO [Listener at localhost/36071] server.Server(415): Started @8352ms 2023-07-12 20:18:05,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:05,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6e75c809{HTTP/1.1, (http/1.1)}{0.0.0.0:43111} 2023-07-12 20:18:05,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8368ms 2023-07-12 20:18:05,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:05,643 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:05,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:05,667 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:05,667 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:05,667 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:05,667 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:05,667 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:05,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:05,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42533,1689193083113 from backup master directory 2023-07-12 20:18:05,672 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:05,675 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:05,676 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:05,676 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:05,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:05,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 20:18:05,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 20:18:05,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/hbase.id with ID: 66677ade-6bf9-45c3-bb71-6001d75a9e7b 2023-07-12 20:18:05,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:05,873 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:05,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0fb2cca5 to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:05,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a38f91b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:05,982 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:05,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 20:18:06,006 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 20:18:06,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 20:18:06,009 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 20:18:06,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 20:18:06,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:06,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store-tmp 2023-07-12 20:18:06,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:06,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:06,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:06,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 20:18:06,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:06,114 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/WALs/jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:06,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42533%2C1689193083113, suffix=, logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/WALs/jenkins-hbase4.apache.org,42533,1689193083113, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/oldWALs, maxLogs=10 2023-07-12 20:18:06,208 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:06,208 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:06,208 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:06,220 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 20:18:06,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/WALs/jenkins-hbase4.apache.org,42533,1689193083113/jenkins-hbase4.apache.org%2C42533%2C1689193083113.1689193086157 2023-07-12 20:18:06,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK], DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK]] 2023-07-12 20:18:06,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:06,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:06,298 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,377 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,384 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 20:18:06,415 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 20:18:06,430 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 20:18:06,437 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,454 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:06,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:06,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10631024000, jitterRate=-0.009908735752105713}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:06,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:06,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 20:18:06,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 20:18:06,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 20:18:06,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 20:18:06,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 20:18:06,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec 2023-07-12 20:18:06,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 20:18:06,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 20:18:06,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 20:18:06,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 20:18:06,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 20:18:06,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 20:18:06,592 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:06,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 20:18:06,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 20:18:06,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 20:18:06,614 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:06,614 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:06,614 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:06,614 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:06,614 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:06,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42533,1689193083113, sessionid=0x1015b2f70320000, setting cluster-up flag (Was=false) 2023-07-12 20:18:06,633 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:06,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 20:18:06,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:06,649 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:06,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 20:18:06,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:06,658 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.hbase-snapshot/.tmp 2023-07-12 20:18:06,719 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(951): ClusterId : 66677ade-6bf9-45c3-bb71-6001d75a9e7b 2023-07-12 20:18:06,720 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(951): ClusterId : 66677ade-6bf9-45c3-bb71-6001d75a9e7b 2023-07-12 20:18:06,719 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(951): ClusterId : 66677ade-6bf9-45c3-bb71-6001d75a9e7b 2023-07-12 20:18:06,729 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:06,729 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:06,729 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:06,738 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:06,738 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:06,738 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:06,738 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:06,746 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:06,746 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:06,750 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:06,750 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:06,756 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:06,770 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ReadOnlyZKClient(139): Connect 0x06403b8b to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-12 20:18:06,772 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ReadOnlyZKClient(139): Connect 0x6bbc0914 to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:06,773 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ReadOnlyZKClient(139): Connect 0x407aac69 to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:06,809 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 20:18:06,820 DEBUG [RS:0;jenkins-hbase4:41567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1cd9945d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:06,824 DEBUG [RS:0;jenkins-hbase4:41567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13b1bbf9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:06,824 DEBUG [RS:2;jenkins-hbase4:46283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49932b54, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:06,824 DEBUG [RS:2;jenkins-hbase4:46283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b924ee1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:06,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 20:18:06,833 DEBUG [RS:1;jenkins-hbase4:39187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d2af1ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:06,833 DEBUG [RS:1;jenkins-hbase4:39187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@708331e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:06,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 20:18:06,838 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:06,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 20:18:06,862 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39187 2023-07-12 20:18:06,863 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41567 2023-07-12 20:18:06,871 INFO [RS:1;jenkins-hbase4:39187] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:06,872 INFO [RS:1;jenkins-hbase4:39187] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:06,872 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:06,871 INFO [RS:0;jenkins-hbase4:41567] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:06,872 INFO [RS:0;jenkins-hbase4:41567] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:06,872 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:06,873 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46283 2023-07-12 20:18:06,873 INFO [RS:2;jenkins-hbase4:46283] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:06,873 INFO [RS:2;jenkins-hbase4:46283] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:06,874 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:06,877 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:39187, startcode=1689193085232 2023-07-12 20:18:06,877 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:46283, startcode=1689193085424 2023-07-12 20:18:06,879 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:41567, startcode=1689193085044 2023-07-12 20:18:06,904 DEBUG [RS:0;jenkins-hbase4:41567] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:06,904 DEBUG [RS:1;jenkins-hbase4:39187] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:06,904 DEBUG [RS:2;jenkins-hbase4:46283] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:06,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:07,004 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45991, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:07,006 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40529, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-12 20:18:07,006 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48217, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:07,019 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:07,035 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:07,037 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:07,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:07,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 
0.0 etc. 2023-07-12 20:18:07,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:07,052 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 20:18:07,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:07,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:07,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:07,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:07,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 20:18:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:07,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,061 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 20:18:07,061 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 20:18:07,061 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 20:18:07,062 WARN [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 20:18:07,062 WARN [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 20:18:07,062 WARN [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-12 20:18:07,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689193117063 2023-07-12 20:18:07,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 20:18:07,072 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 20:18:07,072 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:07,073 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 20:18:07,076 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:07,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 20:18:07,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 20:18:07,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 20:18:07,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 20:18:07,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:07,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 20:18:07,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 20:18:07,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 20:18:07,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 20:18:07,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 20:18:07,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193087098,5,FailOnTimeoutGroup] 2023-07-12 20:18:07,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193087098,5,FailOnTimeoutGroup] 2023-07-12 20:18:07,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 20:18:07,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:07,155 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:07,157 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:07,157 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 2023-07-12 20:18:07,163 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:46283, startcode=1689193085424 2023-07-12 20:18:07,164 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:41567, startcode=1689193085044 2023-07-12 20:18:07,164 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:39187, startcode=1689193085232 2023-07-12 20:18:07,171 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,173 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 20:18:07,174 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 20:18:07,185 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:07,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 20:18:07,186 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 2023-07-12 20:18:07,187 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,187 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:07,187 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 20:18:07,187 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41485 2023-07-12 20:18:07,188 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46167 2023-07-12 20:18:07,188 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 2023-07-12 20:18:07,188 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41485 2023-07-12 20:18:07,189 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46167 2023-07-12 20:18:07,189 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 2023-07-12 20:18:07,189 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41485 2023-07-12 20:18:07,189 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46167 2023-07-12 20:18:07,191 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:07,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:07,199 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:07,199 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/info 2023-07-12 20:18:07,200 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,200 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,200 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:07,200 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,200 WARN [RS:1;jenkins-hbase4:39187] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:07,201 WARN [RS:2;jenkins-hbase4:46283] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:07,206 INFO [RS:1;jenkins-hbase4:39187] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:07,207 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,200 WARN [RS:0;jenkins-hbase4:41567] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 20:18:07,208 INFO [RS:0;jenkins-hbase4:41567] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:07,208 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,201 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,206 INFO [RS:2;jenkins-hbase4:46283] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:07,208 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,209 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:07,209 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41567,1689193085044] 2023-07-12 20:18:07,209 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39187,1689193085232] 2023-07-12 20:18:07,209 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46283,1689193085424] 2023-07-12 20:18:07,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:07,217 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:07,219 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 20:18:07,230 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/table 2023-07-12 20:18:07,231 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:07,232 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,234 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740 2023-07-12 20:18:07,236 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740 2023-07-12 20:18:07,236 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,237 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,238 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,238 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,238 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,239 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,240 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,242 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 20:18:07,243 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,243 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,246 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:07,256 DEBUG [RS:0;jenkins-hbase4:41567] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:07,259 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:07,260 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10683640160, jitterRate=-0.005008473992347717}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:07,260 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:07,260 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:07,260 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:07,260 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:07,260 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:07,260 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:07,256 DEBUG [RS:1;jenkins-hbase4:39187] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:07,256 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:07,265 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:07,266 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:07,272 INFO [RS:2;jenkins-hbase4:46283] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:07,273 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:07,273 INFO [PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 20:18:07,278 INFO [RS:1;jenkins-hbase4:39187] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:07,277 INFO [RS:0;jenkins-hbase4:41567] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:07,288 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 
2023-07-12 20:18:07,307 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 20:18:07,311 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 20:18:07,316 INFO [RS:1;jenkins-hbase4:39187] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:07,316 INFO [RS:0;jenkins-hbase4:41567] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:07,316 INFO [RS:2;jenkins-hbase4:46283] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:07,328 INFO [RS:0;jenkins-hbase4:41567] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:07,329 INFO [RS:1;jenkins-hbase4:39187] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:07,328 INFO [RS:2;jenkins-hbase4:46283] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:07,329 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,329 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,329 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,339 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:07,339 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:07,339 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:07,350 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,350 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:07,350 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,352 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,350 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,352 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:07,353 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 
20:18:07,353 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,353 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:2;jenkins-hbase4:46283] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:07,354 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:0;jenkins-hbase4:41567] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,354 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,355 DEBUG [RS:1;jenkins-hbase4:39187] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:07,366 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,367 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,367 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,367 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,367 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:07,367 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,368 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,368 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,369 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,391 INFO [RS:2;jenkins-hbase4:46283] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:07,391 INFO [RS:0;jenkins-hbase4:41567] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:07,393 INFO [RS:1;jenkins-hbase4:39187] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:07,396 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46283,1689193085424-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,396 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39187,1689193085232-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,396 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41567,1689193085044-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:07,439 INFO [RS:1;jenkins-hbase4:39187] regionserver.Replication(203): jenkins-hbase4.apache.org,39187,1689193085232 started 2023-07-12 20:18:07,439 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39187,1689193085232, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39187, sessionid=0x1015b2f70320002 2023-07-12 20:18:07,440 INFO [RS:2;jenkins-hbase4:46283] regionserver.Replication(203): jenkins-hbase4.apache.org,46283,1689193085424 started 2023-07-12 20:18:07,440 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46283,1689193085424, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46283, sessionid=0x1015b2f70320003 2023-07-12 20:18:07,440 INFO [RS:0;jenkins-hbase4:41567] regionserver.Replication(203): jenkins-hbase4.apache.org,41567,1689193085044 started 2023-07-12 20:18:07,440 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:07,440 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41567,1689193085044, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41567, sessionid=0x1015b2f70320001 2023-07-12 20:18:07,440 DEBUG [RS:2;jenkins-hbase4:46283] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,441 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46283,1689193085424' 2023-07-12 20:18:07,441 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:07,441 DEBUG 
[RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:07,441 DEBUG [RS:0;jenkins-hbase4:41567] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,442 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:07,443 DEBUG [RS:1;jenkins-hbase4:39187] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,443 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39187,1689193085232' 2023-07-12 20:18:07,443 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:07,442 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41567,1689193085044' 2023-07-12 20:18:07,443 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:07,444 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:07,444 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:07,445 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:07,445 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:07,445 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:07,445 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:07,445 DEBUG [RS:0;jenkins-hbase4:41567] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:07,446 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41567,1689193085044' 2023-07-12 20:18:07,446 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:07,446 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:07,445 DEBUG [RS:1;jenkins-hbase4:39187] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:07,446 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39187,1689193085232' 2023-07-12 20:18:07,446 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:07,447 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc 
started 2023-07-12 20:18:07,447 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:07,447 DEBUG [RS:2;jenkins-hbase4:46283] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,447 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46283,1689193085424' 2023-07-12 20:18:07,447 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:07,448 DEBUG [RS:1;jenkins-hbase4:39187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:07,448 DEBUG [RS:2;jenkins-hbase4:46283] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:07,448 DEBUG [RS:0;jenkins-hbase4:41567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:07,449 DEBUG [RS:2;jenkins-hbase4:46283] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:07,449 INFO [RS:2;jenkins-hbase4:46283] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:07,449 INFO [RS:2;jenkins-hbase4:46283] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 20:18:07,449 DEBUG [RS:0;jenkins-hbase4:41567] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:07,449 INFO [RS:0;jenkins-hbase4:41567] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:07,450 INFO [RS:0;jenkins-hbase4:41567] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 20:18:07,458 DEBUG [RS:1;jenkins-hbase4:39187] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:07,459 INFO [RS:1;jenkins-hbase4:39187] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:07,459 INFO [RS:1;jenkins-hbase4:39187] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 20:18:07,464 DEBUG [jenkins-hbase4:42533] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 20:18:07,482 DEBUG [jenkins-hbase4:42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:07,484 DEBUG [jenkins-hbase4:42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:07,484 DEBUG [jenkins-hbase4:42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:07,484 DEBUG [jenkins-hbase4:42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:07,484 DEBUG [jenkins-hbase4:42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:07,488 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46283,1689193085424, state=OPENING 2023-07-12 20:18:07,496 DEBUG [PEWorker-1] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 20:18:07,499 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:07,500 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:07,505 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:07,568 INFO [RS:1;jenkins-hbase4:39187] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39187%2C1689193085232, suffix=, logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,39187,1689193085232, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs, maxLogs=32 2023-07-12 20:18:07,573 INFO [RS:0;jenkins-hbase4:41567] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41567%2C1689193085044, suffix=, logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,41567,1689193085044, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs, maxLogs=32 2023-07-12 20:18:07,575 INFO [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46283%2C1689193085424, suffix=, logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,46283,1689193085424, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs, maxLogs=32 2023-07-12 20:18:07,602 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:07,602 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:07,602 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:07,619 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:07,619 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:07,619 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:07,620 INFO [RS:1;jenkins-hbase4:39187] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,39187,1689193085232/jenkins-hbase4.apache.org%2C39187%2C1689193085232.1689193087571 2023-07-12 20:18:07,622 DEBUG [RS:1;jenkins-hbase4:39187] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK], DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK]] 2023-07-12 20:18:07,636 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:07,636 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:07,636 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:07,651 INFO [RS:0;jenkins-hbase4:41567] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,41567,1689193085044/jenkins-hbase4.apache.org%2C41567%2C1689193085044.1689193087575 2023-07-12 20:18:07,653 DEBUG [RS:0;jenkins-hbase4:41567] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK], 
DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK]] 2023-07-12 20:18:07,653 INFO [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,46283,1689193085424/jenkins-hbase4.apache.org%2C46283%2C1689193085424.1689193087577 2023-07-12 20:18:07,654 DEBUG [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK], DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK]] 2023-07-12 20:18:07,662 WARN [ReadOnlyZKClient-127.0.0.1:51228@0x0fb2cca5] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 20:18:07,689 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,692 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:07,694 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36006, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:07,698 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:07,705 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36016, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:07,706 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46283] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:36016 deadline: 1689193147705, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:07,715 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 20:18:07,715 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:07,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46283%2C1689193085424.meta, suffix=.meta, logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,46283,1689193085424, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs, maxLogs=32 2023-07-12 20:18:07,741 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:07,743 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:07,745 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:07,762 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,46283,1689193085424/jenkins-hbase4.apache.org%2C46283%2C1689193085424.meta.1689193087721.meta 2023-07-12 20:18:07,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK], DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK]] 2023-07-12 20:18:07,768 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:07,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:07,773 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 20:18:07,775 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 20:18:07,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 20:18:07,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:07,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 20:18:07,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 20:18:07,818 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:07,831 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/info 2023-07-12 20:18:07,831 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/info 2023-07-12 20:18:07,833 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:07,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:07,842 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:07,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:07,843 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:07,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 20:18:07,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/table 2023-07-12 20:18:07,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/table 2023-07-12 20:18:07,846 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:07,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:07,849 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740 2023-07-12 20:18:07,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740 2023-07-12 20:18:07,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 20:18:07,860 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:07,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9703087520, jitterRate=-0.09632955491542816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:07,862 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:07,882 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689193087686 2023-07-12 20:18:07,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 20:18:07,906 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 20:18:07,907 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46283,1689193085424, state=OPEN 2023-07-12 20:18:07,910 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 20:18:07,910 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:07,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 20:18:07,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46283,1689193085424 in 405 msec 2023-07-12 20:18:07,921 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 20:18:07,921 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 628 msec 2023-07-12 20:18:07,928 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0800 sec 2023-07-12 20:18:07,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689193087928, completionTime=-1 2023-07-12 20:18:07,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 20:18:07,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-12 20:18:07,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 20:18:07,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689193147998 2023-07-12 20:18:07,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689193207998 2023-07-12 20:18:07,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 69 msec 2023-07-12 20:18:08,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42533,1689193083113-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:08,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42533,1689193083113-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:08,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42533,1689193083113-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:08,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42533, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:08,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:08,036 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 20:18:08,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 20:18:08,046 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:08,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 20:18:08,059 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:08,064 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:08,084 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,089 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 empty. 2023-07-12 20:18:08,091 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,091 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 20:18:08,148 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:08,151 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 455649b011ddbbda985bd47060a43b64, NAME => 'hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 455649b011ddbbda985bd47060a43b64, disabling compactions & flushes 2023-07-12 20:18:08,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. after waiting 0 ms 2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:08,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:08,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:08,204 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:08,226 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:08,229 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193088207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193088207"}]},"ts":"1689193088207"} 2023-07-12 20:18:08,234 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 20:18:08,238 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:08,240 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:08,247 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,248 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 empty. 
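The corresponding create 'hbase:rsgroup' entry adds two extra attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy split policy. A sketch of the same descriptor, assuming TableDescriptorBuilder's setCoprocessor and setRegionSplitPolicyClassName methods (class name illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical sketch of the hbase:rsgroup schema above: one 'm' family,
    // the MultiRowMutationEndpoint coprocessor, and region splitting disabled.
    public final class RSGroupSchemaSketch {
      public static TableDescriptor rsgroupTable() throws java.io.IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)  // VERSIONS => '1'
                .build())
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }

Keeping the rsgroup table in a single region is what lets the group metadata be updated atomically through the multi-row mutation endpoint.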
2023-07-12 20:18:08,249 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,249 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 20:18:08,282 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:08,289 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:08,296 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:08,299 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193088289"}]},"ts":"1689193088289"} 2023-07-12 20:18:08,299 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => aa1db639fdc668f9efd7f5e68d620495, NAME => 'hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:08,311 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 20:18:08,320 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:08,320 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:08,320 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:08,320 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:08,320 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:08,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, ASSIGN}] 2023-07-12 20:18:08,331 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, ASSIGN 2023-07-12 20:18:08,335 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=455649b011ddbbda985bd47060a43b64, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing aa1db639fdc668f9efd7f5e68d620495, disabling compactions & flushes 2023-07-12 20:18:08,340 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. after waiting 0 ms 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,340 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,340 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:08,345 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:08,347 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193088347"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193088347"}]},"ts":"1689193088347"} 2023-07-12 20:18:08,354 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:08,356 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:08,356 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193088356"}]},"ts":"1689193088356"} 2023-07-12 20:18:08,363 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 20:18:08,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:08,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:08,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:08,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:08,369 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:08,369 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, ASSIGN}] 2023-07-12 20:18:08,374 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, ASSIGN 2023-07-12 20:18:08,376 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:08,376 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
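The Put entries against hbase:meta above record each region's regioninfo and state as assignment proceeds; once the TransitRegionStateProcedures finish, the same placement is visible to any client through RegionLocator. A minimal sketch, assuming a standard client Connection (class and method names below are illustrative only):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Illustrative only: print where each region of a table ended up after assignment.
    public final class RegionLocationSketch {
      public static void dumpLocations(Connection conn, TableName table) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }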
2023-07-12 20:18:08,378 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:08,378 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:08,379 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193088378"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193088378"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193088378"}]},"ts":"1689193088378"} 2023-07-12 20:18:08,379 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193088378"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193088378"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193088378"}]},"ts":"1689193088378"} 2023-07-12 20:18:08,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:08,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:08,539 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:08,540 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:08,541 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:08,542 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:08,545 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51698, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:08,546 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:08,551 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,551 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
2023-07-12 20:18:08,551 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa1db639fdc668f9efd7f5e68d620495, NAME => 'hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:08,551 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 455649b011ddbbda985bd47060a43b64, NAME => 'hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:08,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:08,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. service=MultiRowMutationService 2023-07-12 20:18:08,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:08,553 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,556 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,558 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:08,558 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:08,559 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 455649b011ddbbda985bd47060a43b64 columnFamilyName info 2023-07-12 20:18:08,560 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(310): Store=455649b011ddbbda985bd47060a43b64/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:08,563 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of 
region aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,565 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:08,565 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:08,566 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa1db639fdc668f9efd7f5e68d620495 columnFamilyName m 2023-07-12 20:18:08,567 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(310): Store=aa1db639fdc668f9efd7f5e68d620495/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:08,568 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,569 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,570 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:08,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:08,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:08,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 455649b011ddbbda985bd47060a43b64; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10419361600, jitterRate=-0.02962133288383484}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:08,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:08,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:08,586 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa1db639fdc668f9efd7f5e68d620495; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6a3a3c94, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:08,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:08,593 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64., pid=9, masterSystemTime=1689193088541 2023-07-12 20:18:08,599 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495., pid=8, masterSystemTime=1689193088539 2023-07-12 20:18:08,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:08,608 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:08,608 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193088607"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193088607"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193088607"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193088607"}]},"ts":"1689193088607"} 2023-07-12 20:18:08,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,611 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:08,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
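Once the regionservers report "Opened ...; next sequenceid=2" and the master marks the regions OPEN in hbase:meta, a mini-cluster test normally blocks until that point before issuing client calls. A minimal sketch of that wait, assuming the test's HBaseTestingUtility instance (wrapper class hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // Illustrative: block until every region of the given table is open, mirroring
    // the "Opened ...; next sequenceid=2" entries above.
    public final class WaitForAssignmentSketch {
      public static void waitForTable(HBaseTestingUtility util, TableName table)
          throws IOException, InterruptedException {
        util.waitUntilAllRegionsAssigned(table);  // all regions OPEN in hbase:meta
        util.waitTableAvailable(table);           // table reachable through the client
      }
    }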
2023-07-12 20:18:08,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:08,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193088611"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193088611"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193088611"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193088611"}]},"ts":"1689193088611"} 2023-07-12 20:18:08,624 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-12 20:18:08,624 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,39187,1689193085232 in 227 msec 2023-07-12 20:18:08,626 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-12 20:18:08,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,41567,1689193085044 in 237 msec 2023-07-12 20:18:08,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 20:18:08,645 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, ASSIGN in 302 msec 2023-07-12 20:18:08,645 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 20:18:08,645 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, ASSIGN in 268 msec 2023-07-12 20:18:08,647 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:08,647 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193088647"}]},"ts":"1689193088647"} 2023-07-12 20:18:08,647 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:08,648 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193088647"}]},"ts":"1689193088647"} 2023-07-12 20:18:08,651 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 20:18:08,652 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 20:18:08,659 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute 
state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:08,659 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:08,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 20:18:08,663 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:08,663 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:08,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 612 msec 2023-07-12 20:18:08,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 433 msec 2023-07-12 20:18:08,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:08,699 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48396, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:08,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 20:18:08,737 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:08,742 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:08,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 38 msec 2023-07-12 20:18:08,749 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51714, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:08,754 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 20:18:08,754 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 20:18:08,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 20:18:08,785 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:08,792 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 38 msec 2023-07-12 20:18:08,808 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 20:18:08,811 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 20:18:08,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.134sec 2023-07-12 20:18:08,815 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 20:18:08,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 20:18:08,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 20:18:08,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42533,1689193083113-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 20:18:08,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42533,1689193083113-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
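The CreateNamespaceProcedure entries for 'default' and 'hbase' are issued by the master itself during initialization; user namespaces go through the same machinery via the Admin API. A minimal sketch of the equivalent client call, assuming a standard Connection (helper class hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Illustrative: create a user namespace and list the namespaces the master knows about.
    public final class NamespaceSketch {
      public static void createNamespace(Connection conn, String name) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.createNamespace(NamespaceDescriptor.create(name).build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println("namespace: " + ns.getName());
          }
        }
      }
    }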
2023-07-12 20:18:08,841 DEBUG [Listener at localhost/36071] zookeeper.ReadOnlyZKClient(139): Connect 0x59ea51e3 to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:08,846 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 20:18:08,849 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:08,849 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:08,856 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:08,859 DEBUG [Listener at localhost/36071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@115c6e79, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:08,863 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 20:18:08,886 DEBUG [hconnection-0x5fc06702-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:08,911 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:08,923 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:08,924 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:08,936 DEBUG [Listener at localhost/36071] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 20:18:08,940 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46566, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 20:18:08,957 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 20:18:08,957 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:08,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 20:18:08,963 DEBUG [Listener at localhost/36071] zookeeper.ReadOnlyZKClient(139): Connect 0x5a6bf6db to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:08,968 DEBUG [Listener at localhost/36071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7105eb4d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:08,969 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:08,973 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:08,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b2f7032000a connected 2023-07-12 20:18:09,008 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=676, MaxFileDescriptor=60000, SystemLoadAverage=589, ProcessCount=172, AvailableMemoryMB=5079 2023-07-12 20:18:09,010 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 20:18:09,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:09,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:09,092 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 20:18:09,110 INFO [Listener at localhost/36071] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:09,110 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:09,110 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:09,111 INFO [Listener at localhost/36071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:09,111 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:09,111 INFO [Listener at localhost/36071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:09,111 INFO [Listener at localhost/36071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:09,115 INFO [Listener at localhost/36071] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43429 2023-07-12 20:18:09,116 INFO [Listener at localhost/36071] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:09,118 DEBUG [Listener at localhost/36071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:09,119 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:09,125 INFO [Listener at localhost/36071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:09,128 INFO [Listener at localhost/36071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43429 connecting to ZooKeeper ensemble=127.0.0.1:51228 2023-07-12 20:18:09,139 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:434290x0, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:09,140 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(162): regionserver:434290x0, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:09,141 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43429-0x1015b2f7032000b connected 2023-07-12 20:18:09,142 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 20:18:09,143 DEBUG [Listener at localhost/36071] zookeeper.ZKUtil(164): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:09,146 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43429 2023-07-12 20:18:09,147 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43429 2023-07-12 20:18:09,147 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43429 2023-07-12 20:18:09,150 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43429 2023-07-12 20:18:09,150 DEBUG [Listener at localhost/36071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43429 2023-07-12 20:18:09,153 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:09,153 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:09,154 INFO [Listener at localhost/36071] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:09,154 INFO [Listener at localhost/36071] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:09,154 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:09,155 INFO [Listener at localhost/36071] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:09,155 INFO [Listener at localhost/36071] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:09,156 INFO [Listener at localhost/36071] http.HttpServer(1146): Jetty bound to port 36787 2023-07-12 20:18:09,156 INFO [Listener at localhost/36071] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:09,163 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:09,163 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f357cde{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:09,164 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:09,164 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68aba549{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:09,309 INFO [Listener at localhost/36071] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:09,310 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:09,310 INFO [Listener at localhost/36071] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:09,311 INFO [Listener at localhost/36071] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:09,312 INFO [Listener at localhost/36071] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:09,313 INFO [Listener at localhost/36071] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@16958b7c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/java.io.tmpdir/jetty-0_0_0_0-36787-hbase-server-2_4_18-SNAPSHOT_jar-_-any-676528531683438840/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:09,315 INFO [Listener at localhost/36071] server.AbstractConnector(333): Started ServerConnector@1d8aa3aa{HTTP/1.1, (http/1.1)}{0.0.0.0:36787} 2023-07-12 20:18:09,315 INFO [Listener at localhost/36071] server.Server(415): Started @12055ms 2023-07-12 20:18:09,319 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(951): ClusterId : 66677ade-6bf9-45c3-bb71-6001d75a9e7b 2023-07-12 20:18:09,320 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:09,323 
DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:09,323 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:09,326 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:09,330 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ReadOnlyZKClient(139): Connect 0x5663bd11 to 127.0.0.1:51228 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:09,343 DEBUG [RS:3;jenkins-hbase4:43429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58a449ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:09,343 DEBUG [RS:3;jenkins-hbase4:43429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c9bd661, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:09,357 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43429 2023-07-12 20:18:09,357 INFO [RS:3;jenkins-hbase4:43429] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:09,357 INFO [RS:3;jenkins-hbase4:43429] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:09,357 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:09,358 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42533,1689193083113 with isa=jenkins-hbase4.apache.org/172.31.14.131:43429, startcode=1689193089109 2023-07-12 20:18:09,359 DEBUG [RS:3;jenkins-hbase4:43429] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:09,367 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48939, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:09,368 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,368 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
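The "Updating default servers" entries show the RSGroupInfoManager folding the newly registered regionserver into the default group, matching the earlier ListRSGroupInfos request from the test. A sketch of how a client can inspect that state, assuming the RSGroupAdminClient API shipped in this hbase-rsgroup module (helper class hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative: list all rsgroups and print the servers currently in the default group.
    public final class RSGroupQuerySketch {
      public static void printDefaultGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          System.out.println("group " + group.getName() + " servers=" + group.getServers());
        }
        RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
        System.out.println("default group size=" + defaultGroup.getServers().size());
      }
    }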
2023-07-12 20:18:09,368 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6 2023-07-12 20:18:09,368 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41485 2023-07-12 20:18:09,368 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46167 2023-07-12 20:18:09,374 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:09,374 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:09,374 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:09,374 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:09,375 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43429,1689193089109] 2023-07-12 20:18:09,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:09,375 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:09,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:09,376 WARN [RS:3;jenkins-hbase4:43429] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 20:18:09,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:09,376 INFO [RS:3;jenkins-hbase4:43429] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:09,376 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:09,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:09,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:09,377 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,379 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:09,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:09,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:09,384 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:09,390 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42533,1689193083113] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 20:18:09,391 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:09,392 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:09,392 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,393 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ZKUtil(162): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:09,396 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:09,397 INFO [RS:3;jenkins-hbase4:43429] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:09,407 INFO [RS:3;jenkins-hbase4:43429] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:09,408 INFO [RS:3;jenkins-hbase4:43429] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:09,408 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:09,408 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:09,411 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,411 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:09,412 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,412 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,412 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,412 DEBUG [RS:3;jenkins-hbase4:43429] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:09,416 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:09,416 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:09,416 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:09,428 INFO [RS:3;jenkins-hbase4:43429] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:09,428 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43429,1689193089109-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:09,447 INFO [RS:3;jenkins-hbase4:43429] regionserver.Replication(203): jenkins-hbase4.apache.org,43429,1689193089109 started 2023-07-12 20:18:09,447 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43429,1689193089109, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43429, sessionid=0x1015b2f7032000b 2023-07-12 20:18:09,447 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:09,447 DEBUG [RS:3;jenkins-hbase4:43429] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,448 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43429,1689193089109' 2023-07-12 20:18:09,448 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:09,452 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43429,1689193089109' 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:09,453 DEBUG [RS:3;jenkins-hbase4:43429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:09,454 DEBUG [RS:3;jenkins-hbase4:43429] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:09,454 INFO [RS:3;jenkins-hbase4:43429] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:09,454 INFO [RS:3;jenkins-hbase4:43429] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
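The entries that follow record the test's setup issuing RSGroupAdminService calls against the master on port 42533: an AddRSGroup for a group named master, ListRSGroupInfos, and a MoveServers attempt that is rejected with a ConstraintException because the master's own address is not a live region server. As a rough client-side sketch of the calls that produce such entries (RSGroupAdminClient and its moveServers method appear in the stack trace below; the connection setup, group name, and server address here are illustrative assumptions, not values from this run):

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Shows up in the master log as an AddRSGroup master service request.
      rsGroupAdmin.addRSGroup("some_group");   // hypothetical group name

      // Shows up as a MoveServers request. Offering an address that is not a
      // live region server (for example the master's own host:port) is rejected
      // with a ConstraintException like the one logged below.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("host1.example.com", 16020)),  // hypothetical server
          "some_group");
    }
  }
}

In this run the calls are driven through VerifyingRSGroupAdminClient, which the stack trace below shows wrapping the same RSGroupAdminClient API.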
2023-07-12 20:18:09,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:09,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:09,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:09,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:09,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:09,491 DEBUG [hconnection-0x5275ffcd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:09,505 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36028, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:09,512 DEBUG [hconnection-0x5275ffcd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:09,515 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51716, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:09,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:09,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:09,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:09,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:09,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:46566 deadline: 1689194289531, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:09,534 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:09,536 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:09,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:09,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:09,539 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:09,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:09,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:09,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:09,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:09,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:09,557 INFO [RS:3;jenkins-hbase4:43429] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43429%2C1689193089109, suffix=, 
logDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,43429,1689193089109, archiveDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs, maxLogs=32 2023-07-12 20:18:09,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:09,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:09,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:09,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:09,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:09,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,600 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK] 2023-07-12 20:18:09,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:09,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:09,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:09,609 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK] 2023-07-12 20:18:09,613 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK] 2023-07-12 20:18:09,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(238): Moving server region 455649b011ddbbda985bd47060a43b64, which do not belong to RSGroup 
Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,617 INFO [RS:3;jenkins-hbase4:43429] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/WALs/jenkins-hbase4.apache.org,43429,1689193089109/jenkins-hbase4.apache.org%2C43429%2C1689193089109.1689193089558 2023-07-12 20:18:09,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE 2023-07-12 20:18:09,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(238): Moving server region aa1db639fdc668f9efd7f5e68d620495, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:09,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE 2023-07-12 20:18:09,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-12 20:18:09,626 DEBUG [RS:3;jenkins-hbase4:43429] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46089,DS-27017766-40ca-43a4-88da-0658c7086ccb,DISK], DatanodeInfoWithStorage[127.0.0.1:38053,DS-8b674e46-cd2d-485c-80dd-f03b5eb4b7a0,DISK], DatanodeInfoWithStorage[127.0.0.1:39263,DS-1cb4aa6a-03af-489e-bae3-838444f77a47,DISK]] 2023-07-12 20:18:09,626 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE 2023-07-12 20:18:09,626 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE 2023-07-12 20:18:09,628 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:09,628 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:09,629 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193089628"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193089628"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193089628"}]},"ts":"1689193089628"} 2023-07-12 20:18:09,629 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193089628"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193089628"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193089628"}]},"ts":"1689193089628"} 2023-07-12 20:18:09,632 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:09,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:09,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:09,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:09,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 455649b011ddbbda985bd47060a43b64, disabling compactions & flushes 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa1db639fdc668f9efd7f5e68d620495, disabling compactions & flushes 2023-07-12 20:18:09,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:09,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. after waiting 0 ms 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. after waiting 0 ms 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:09,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:09,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 455649b011ddbbda985bd47060a43b64 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 20:18:09,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing aa1db639fdc668f9efd7f5e68d620495 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-12 20:18:09,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/1d9a0266a7c54d468008bb4fa2345577 2023-07-12 20:18:09,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/.tmp/info/fbf3c9b199b34ae0843ec8d79454096d 2023-07-12 20:18:09,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/.tmp/info/fbf3c9b199b34ae0843ec8d79454096d as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info/fbf3c9b199b34ae0843ec8d79454096d 2023-07-12 20:18:09,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/1d9a0266a7c54d468008bb4fa2345577 as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/1d9a0266a7c54d468008bb4fa2345577 2023-07-12 20:18:10,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info/fbf3c9b199b34ae0843ec8d79454096d, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 20:18:10,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/1d9a0266a7c54d468008bb4fa2345577, entries=3, sequenceid=9, filesize=5.2 K 2023-07-12 20:18:10,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 455649b011ddbbda985bd47060a43b64 in 210ms, sequenceid=6, compaction requested=false 2023-07-12 20:18:10,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for aa1db639fdc668f9efd7f5e68d620495 in 210ms, sequenceid=9, compaction requested=false 2023-07-12 20:18:10,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 20:18:10,013 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 20:18:10,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 20:18:10,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 20:18:10,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:10,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:10,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:10,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 455649b011ddbbda985bd47060a43b64 move to jenkins-hbase4.apache.org,43429,1689193089109 record at close sequenceid=6 2023-07-12 20:18:10,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:10,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:10,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aa1db639fdc668f9efd7f5e68d620495 move to jenkins-hbase4.apache.org,43429,1689193089109 record at close sequenceid=9 2023-07-12 20:18:10,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,034 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=CLOSED 2023-07-12 20:18:10,035 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193090034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193090034"}]},"ts":"1689193090034"} 2023-07-12 20:18:10,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,037 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=CLOSED 2023-07-12 20:18:10,038 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193090037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193090037"}]},"ts":"1689193090037"} 2023-07-12 20:18:10,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished 
subprocedure pid=14, resume processing ppid=12 2023-07-12 20:18:10,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,39187,1689193085232 in 408 msec 2023-07-12 20:18:10,047 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-12 20:18:10,047 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:10,047 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,41567,1689193085044 in 406 msec 2023-07-12 20:18:10,048 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:10,049 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 20:18:10,049 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:10,049 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193090049"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193090049"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193090049"}]},"ts":"1689193090049"} 2023-07-12 20:18:10,050 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:10,050 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193090050"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193090050"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193090050"}]},"ts":"1689193090050"} 2023-07-12 20:18:10,052 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=12, state=RUNNABLE; OpenRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:10,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:10,205 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:10,205 DEBUG 
[RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:10,209 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:10,216 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:10,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa1db639fdc668f9efd7f5e68d620495, NAME => 'hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. service=MultiRowMutationService 2023-07-12 20:18:10,217 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,223 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,225 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:10,225 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:10,226 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa1db639fdc668f9efd7f5e68d620495 columnFamilyName m 2023-07-12 20:18:10,246 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(539): loaded hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/1d9a0266a7c54d468008bb4fa2345577 2023-07-12 20:18:10,247 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(310): Store=aa1db639fdc668f9efd7f5e68d620495/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:10,249 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,252 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:10,259 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa1db639fdc668f9efd7f5e68d620495; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7535ebb9, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:10,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:10,261 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495., pid=17, masterSystemTime=1689193090205 2023-07-12 20:18:10,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:10,267 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:10,267 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
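The open of hbase:rsgroup above shows MultiRowMutationEndpoint being loaded "from HTD" with path null, i.e. the coprocessor is declared on the table descriptor rather than in hbase-site.xml. hbase:rsgroup itself is created by the master, but the same descriptor-level wiring is available to user tables; a small sketch with a hypothetical table name:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CoprocessorOnDescriptorSketch {
  public static void main(String[] args) throws Exception {
    // Declaring a coprocessor on the descriptor is what makes it load "from HTD"
    // with path null (the class must already be on the region server classpath).
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("my_table"))   // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("m")))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td);
  }
}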
2023-07-12 20:18:10,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 455649b011ddbbda985bd47060a43b64, NAME => 'hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:10,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:10,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,268 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:10,269 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193090268"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193090268"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193090268"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193090268"}]},"ts":"1689193090268"} 2023-07-12 20:18:10,272 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,274 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:10,274 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:10,275 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 455649b011ddbbda985bd47060a43b64 columnFamilyName info 2023-07-12 20:18:10,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-12 20:18:10,278 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,43429,1689193089109 in 214 msec 2023-07-12 20:18:10,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE in 655 msec 2023-07-12 20:18:10,302 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(539): loaded hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info/fbf3c9b199b34ae0843ec8d79454096d 2023-07-12 20:18:10,302 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(310): Store=455649b011ddbbda985bd47060a43b64/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:10,303 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,311 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:10,312 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 455649b011ddbbda985bd47060a43b64; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11875315040, jitterRate=0.10597489774227142}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:10,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:10,314 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64., pid=16, masterSystemTime=1689193090205 2023-07-12 20:18:10,317 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:10,317 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
2023-07-12 20:18:10,318 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:10,318 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193090318"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193090318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193090318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193090318"}]},"ts":"1689193090318"} 2023-07-12 20:18:10,325 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-12 20:18:10,325 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; OpenRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,43429,1689193089109 in 269 msec 2023-07-12 20:18:10,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE in 708 msec 2023-07-12 20:18:10,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 20:18:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to default 2023-07-12 20:18:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:10,625 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41567] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:51716 deadline: 1689193150625, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43429 startCode=1689193089109. As of locationSeqNum=9. 
2023-07-12 20:18:10,731 DEBUG [hconnection-0x5275ffcd-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:10,733 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58598, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:10,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:10,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:10,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:10,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:10,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:10,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:10,772 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:10,774 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41567] ipc.CallRunner(144): callId: 48 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:51714 deadline: 1689193150774, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43429 startCode=1689193089109. As of locationSeqNum=9. 
2023-07-12 20:18:10,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-12 20:18:10,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 20:18:10,878 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:10,881 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58612, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:10,884 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:10,885 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:10,885 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:10,885 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:10,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 20:18:10,890 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 empty. 2023-07-12 20:18:10,896 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 empty. 
2023-07-12 20:18:10,897 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d empty. 2023-07-12 20:18:10,897 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d empty. 2023-07-12 20:18:10,897 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 empty. 2023-07-12 20:18:10,897 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:10,897 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:10,898 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:10,898 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:10,898 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:10,898 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 20:18:10,918 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:10,920 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e308efaca36c81f63f626f6725eb8a2d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:10,921 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b7ae6f8dee8e8dc1394228d7ab5ddf20, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:10,921 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9ac77ace0ae4fbdad5bf7568a67a6af2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 9ac77ace0ae4fbdad5bf7568a67a6af2, disabling compactions & flushes 2023-07-12 20:18:11,006 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. after waiting 0 ms 2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,006 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 
2023-07-12 20:18:11,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 9ac77ace0ae4fbdad5bf7568a67a6af2: 2023-07-12 20:18:11,007 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 43d6730976614a44b6347298afd55d5d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:11,012 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b7ae6f8dee8e8dc1394228d7ab5ddf20, disabling compactions & flushes 2023-07-12 20:18:11,014 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e308efaca36c81f63f626f6725eb8a2d, disabling compactions & flushes 2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. after waiting 0 ms 2023-07-12 20:18:11,015 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,015 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 
2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b7ae6f8dee8e8dc1394228d7ab5ddf20: 2023-07-12 20:18:11,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. after waiting 0 ms 2023-07-12 20:18:11,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e308efaca36c81f63f626f6725eb8a2d: 2023-07-12 20:18:11,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 131bd4fc840a5a1afe1b095f2acbf0b7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:11,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 131bd4fc840a5a1afe1b095f2acbf0b7, disabling compactions & flushes 2023-07-12 20:18:11,044 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,044 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 43d6730976614a44b6347298afd55d5d, disabling compactions & flushes 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:11,045 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. after waiting 0 ms 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. after waiting 0 ms 2023-07-12 20:18:11,045 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:11,045 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 131bd4fc840a5a1afe1b095f2acbf0b7: 2023-07-12 20:18:11,046 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 
2023-07-12 20:18:11,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 43d6730976614a44b6347298afd55d5d: 2023-07-12 20:18:11,050 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:11,052 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091051"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193091051"}]},"ts":"1689193091051"} 2023-07-12 20:18:11,052 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091051"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193091051"}]},"ts":"1689193091051"} 2023-07-12 20:18:11,052 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091051"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193091051"}]},"ts":"1689193091051"} 2023-07-12 20:18:11,052 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091051"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193091051"}]},"ts":"1689193091051"} 2023-07-12 20:18:11,053 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091051"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193091051"}]},"ts":"1689193091051"} 2023-07-12 20:18:11,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 20:18:11,120 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 20:18:11,121 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:11,122 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193091121"}]},"ts":"1689193091121"} 2023-07-12 20:18:11,124 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 20:18:11,138 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:11,139 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:11,139 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:11,139 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:11,139 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, ASSIGN}] 2023-07-12 20:18:11,142 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, ASSIGN 2023-07-12 20:18:11,143 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, ASSIGN 2023-07-12 20:18:11,144 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, ASSIGN 2023-07-12 20:18:11,144 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, ASSIGN 2023-07-12 20:18:11,145 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:11,145 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, ASSIGN 2023-07-12 20:18:11,145 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:11,145 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:11,145 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:11,147 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:11,295 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 20:18:11,298 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:11,299 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,299 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,299 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,298 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:11,299 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091298"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193091298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193091298"}]},"ts":"1689193091298"} 2023-07-12 20:18:11,299 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091298"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193091298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193091298"}]},"ts":"1689193091298"} 2023-07-12 20:18:11,299 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091298"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193091298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193091298"}]},"ts":"1689193091298"} 2023-07-12 20:18:11,299 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091298"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193091298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193091298"}]},"ts":"1689193091298"} 2023-07-12 20:18:11,299 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091298"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193091298"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193091298"}]},"ts":"1689193091298"} 2023-07-12 20:18:11,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=22, state=RUNNABLE; OpenRegionProcedure 
43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:11,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=19, state=RUNNABLE; OpenRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:11,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=21, state=RUNNABLE; OpenRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:11,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=23, state=RUNNABLE; OpenRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:11,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=20, state=RUNNABLE; OpenRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:11,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 20:18:11,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43d6730976614a44b6347298afd55d5d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 20:18:11,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,464 INFO [StoreOpener-43d6730976614a44b6347298afd55d5d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,467 DEBUG [StoreOpener-43d6730976614a44b6347298afd55d5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/f 2023-07-12 20:18:11,467 DEBUG [StoreOpener-43d6730976614a44b6347298afd55d5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/f 2023-07-12 20:18:11,467 INFO [StoreOpener-43d6730976614a44b6347298afd55d5d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43d6730976614a44b6347298afd55d5d columnFamilyName f 2023-07-12 20:18:11,468 INFO [StoreOpener-43d6730976614a44b6347298afd55d5d-1] regionserver.HStore(310): Store=43d6730976614a44b6347298afd55d5d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:11,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7ae6f8dee8e8dc1394228d7ab5ddf20, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 20:18:11,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,473 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,476 DEBUG [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/f 2023-07-12 20:18:11,476 DEBUG [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/f 2023-07-12 20:18:11,477 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7ae6f8dee8e8dc1394228d7ab5ddf20 columnFamilyName f 2023-07-12 20:18:11,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:11,478 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] regionserver.HStore(310): Store=b7ae6f8dee8e8dc1394228d7ab5ddf20/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:11,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:11,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 43d6730976614a44b6347298afd55d5d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9719307360, jitterRate=-0.09481896460056305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:11,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 43d6730976614a44b6347298afd55d5d: 2023-07-12 20:18:11,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d., pid=24, masterSystemTime=1689193091455 2023-07-12 20:18:11,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:11,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:11,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,493 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e308efaca36c81f63f626f6725eb8a2d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 20:18:11,493 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091492"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193091492"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193091492"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193091492"}]},"ts":"1689193091492"} 2023-07-12 20:18:11,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:11,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7ae6f8dee8e8dc1394228d7ab5ddf20; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10386726080, jitterRate=-0.03266075253486633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:11,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7ae6f8dee8e8dc1394228d7ab5ddf20: 2023-07-12 20:18:11,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20., pid=28, masterSystemTime=1689193091464 2023-07-12 20:18:11,503 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,505 DEBUG [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/f 2023-07-12 20:18:11,505 DEBUG [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/f 2023-07-12 20:18:11,506 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e308efaca36c81f63f626f6725eb8a2d columnFamilyName f 2023-07-12 20:18:11,507 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:11,507 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091507"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193091507"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193091507"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193091507"}]},"ts":"1689193091507"} 2023-07-12 20:18:11,507 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] regionserver.HStore(310): Store=e308efaca36c81f63f626f6725eb8a2d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:11,508 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,509 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:11,510 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:11,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 131bd4fc840a5a1afe1b095f2acbf0b7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 20:18:11,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=22 2023-07-12 20:18:11,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=22, state=SUCCESS; OpenRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,43429,1689193089109 in 195 msec 2023-07-12 20:18:11,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,513 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,515 DEBUG [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/f 2023-07-12 20:18:11,515 DEBUG 
[StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/f 2023-07-12 20:18:11,516 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 131bd4fc840a5a1afe1b095f2acbf0b7 columnFamilyName f 2023-07-12 20:18:11,516 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] regionserver.HStore(310): Store=131bd4fc840a5a1afe1b095f2acbf0b7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:11,517 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, ASSIGN in 372 msec 2023-07-12 20:18:11,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:11,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,520 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=20 2023-07-12 20:18:11,520 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=20, state=SUCCESS; OpenRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,46283,1689193085424 in 198 msec 2023-07-12 20:18:11,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, ASSIGN in 381 msec 2023-07-12 20:18:11,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:11,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e308efaca36c81f63f626f6725eb8a2d; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10092966560, jitterRate=-0.06001923978328705}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:11,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e308efaca36c81f63f626f6725eb8a2d: 2023-07-12 20:18:11,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:11,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d., pid=26, masterSystemTime=1689193091455 2023-07-12 20:18:11,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:11,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ac77ace0ae4fbdad5bf7568a67a6af2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 20:18:11,527 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,527 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193091527"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193091527"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193091527"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193091527"}]},"ts":"1689193091527"} 2023-07-12 20:18:11,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:11,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,528 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:11,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 131bd4fc840a5a1afe1b095f2acbf0b7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10123917600, jitterRate=-0.05713669955730438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:11,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 131bd4fc840a5a1afe1b095f2acbf0b7: 2023-07-12 20:18:11,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7., pid=27, masterSystemTime=1689193091464 2023-07-12 20:18:11,531 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,534 DEBUG [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/f 2023-07-12 20:18:11,534 DEBUG [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/f 2023-07-12 20:18:11,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:11,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
2023-07-12 20:18:11,536 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ac77ace0ae4fbdad5bf7568a67a6af2 columnFamilyName f 2023-07-12 20:18:11,537 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] regionserver.HStore(310): Store=9ac77ace0ae4fbdad5bf7568a67a6af2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:11,538 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:11,539 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091538"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193091538"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193091538"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193091538"}]},"ts":"1689193091538"} 2023-07-12 20:18:11,540 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=21 2023-07-12 20:18:11,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,540 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; OpenRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,43429,1689193089109 in 225 msec 2023-07-12 20:18:11,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, ASSIGN in 403 msec 2023-07-12 20:18:11,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:11,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:11,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ac77ace0ae4fbdad5bf7568a67a6af2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11047609440, jitterRate=0.028888806700706482}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:11,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ac77ace0ae4fbdad5bf7568a67a6af2: 2023-07-12 20:18:11,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2., pid=25, masterSystemTime=1689193091455 2023-07-12 20:18:11,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:11,560 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=23 2023-07-12 20:18:11,561 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=23, state=SUCCESS; OpenRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,46283,1689193085424 in 235 msec 2023-07-12 20:18:11,561 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:11,562 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193091561"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193091561"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193091561"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193091561"}]},"ts":"1689193091561"} 2023-07-12 20:18:11,564 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, ASSIGN in 422 msec 2023-07-12 20:18:11,568 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=19 2023-07-12 20:18:11,568 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=19, state=SUCCESS; OpenRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,43429,1689193089109 in 260 msec 2023-07-12 20:18:11,571 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-12 20:18:11,571 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, ASSIGN in 429 msec 2023-07-12 20:18:11,572 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:11,572 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193091572"}]},"ts":"1689193091572"} 2023-07-12 20:18:11,574 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 20:18:11,578 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:11,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 810 msec 2023-07-12 20:18:11,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-12 20:18:11,894 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-12 20:18:11,894 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-12 20:18:11,896 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:11,906 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 20:18:11,907 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:11,908 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 
2023-07-12 20:18:11,908 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:11,913 DEBUG [Listener at localhost/36071] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:11,932 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48428, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:11,936 DEBUG [Listener at localhost/36071] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:11,946 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:11,947 DEBUG [Listener at localhost/36071] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:11,952 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58614, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:11,954 DEBUG [Listener at localhost/36071] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:11,968 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:11,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:11,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:11,985 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:11,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:11,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:12,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:12,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:12,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] 
rsgroup.RSGroupAdminServer(345): Moving region 9ac77ace0ae4fbdad5bf7568a67a6af2 to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:12,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:12,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:12,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:12,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:12,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, REOPEN/MOVE 2023-07-12 20:18:12,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region b7ae6f8dee8e8dc1394228d7ab5ddf20 to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:12,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:12,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, REOPEN/MOVE 2023-07-12 20:18:12,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region e308efaca36c81f63f626f6725eb8a2d to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:12,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:12,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:12,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:12,026 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:12,028 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, REOPEN/MOVE 2023-07-12 20:18:12,029 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, REOPEN/MOVE 2023-07-12 20:18:12,032 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:12,032 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092032"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092032"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092032"}]},"ts":"1689193092032"} 2023-07-12 20:18:12,033 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:12,033 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092033"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092033"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092033"}]},"ts":"1689193092033"} 2023-07-12 20:18:12,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, REOPEN/MOVE 2023-07-12 20:18:12,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 43d6730976614a44b6347298afd55d5d to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:12,036 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, REOPEN/MOVE 2023-07-12 20:18:12,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:12,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:12,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:12,036 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:12,037 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=29, state=RUNNABLE; CloseRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:12,038 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:12,038 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092038"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092038"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092038"}]},"ts":"1689193092038"} 2023-07-12 20:18:12,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:12,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, REOPEN/MOVE 2023-07-12 20:18:12,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 131bd4fc840a5a1afe1b095f2acbf0b7 to RSGroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:12,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:12,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:12,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:12,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:12,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:12,062 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, REOPEN/MOVE 2023-07-12 20:18:12,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:12,065 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:12,066 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092065"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092065"}]},"ts":"1689193092065"} 2023-07-12 20:18:12,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, REOPEN/MOVE 2023-07-12 20:18:12,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_409149434, current retry=0 2023-07-12 20:18:12,069 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, REOPEN/MOVE 2023-07-12 20:18:12,071 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:12,071 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092071"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092071"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092071"}]},"ts":"1689193092071"} 2023-07-12 20:18:12,072 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:12,075 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:12,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 43d6730976614a44b6347298afd55d5d, disabling compactions & flushes 2023-07-12 20:18:12,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 
after waiting 0 ms 2023-07-12 20:18:12,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7ae6f8dee8e8dc1394228d7ab5ddf20, disabling compactions & flushes 2023-07-12 20:18:12,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. after waiting 0 ms 2023-07-12 20:18:12,217 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:12,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:12,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:12,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,222 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 43d6730976614a44b6347298afd55d5d: 2023-07-12 20:18:12,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 43d6730976614a44b6347298afd55d5d move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:12,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 
2023-07-12 20:18:12,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7ae6f8dee8e8dc1394228d7ab5ddf20: 2023-07-12 20:18:12,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b7ae6f8dee8e8dc1394228d7ab5ddf20 move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:12,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ac77ace0ae4fbdad5bf7568a67a6af2, disabling compactions & flushes 2023-07-12 20:18:12,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. after waiting 0 ms 2023-07-12 20:18:12,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,227 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=CLOSED 2023-07-12 20:18:12,227 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092226"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193092226"}]},"ts":"1689193092226"} 2023-07-12 20:18:12,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 131bd4fc840a5a1afe1b095f2acbf0b7, disabling compactions & flushes 2023-07-12 20:18:12,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:12,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:12,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
after waiting 0 ms 2023-07-12 20:18:12,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:12,229 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=CLOSED 2023-07-12 20:18:12,229 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092228"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193092228"}]},"ts":"1689193092228"} 2023-07-12 20:18:12,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:12,239 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:12,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ac77ace0ae4fbdad5bf7568a67a6af2: 2023-07-12 20:18:12,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9ac77ace0ae4fbdad5bf7568a67a6af2 move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:12,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
2023-07-12 20:18:12,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 131bd4fc840a5a1afe1b095f2acbf0b7: 2023-07-12 20:18:12,241 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 131bd4fc840a5a1afe1b095f2acbf0b7 move to jenkins-hbase4.apache.org,41567,1689193085044 record at close sequenceid=2 2023-07-12 20:18:12,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-12 20:18:12,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,43429,1689193089109 in 158 msec 2023-07-12 20:18:12,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-12 20:18:12,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,46283,1689193085424 in 192 msec 2023-07-12 20:18:12,244 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:12,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e308efaca36c81f63f626f6725eb8a2d, disabling compactions & flushes 2023-07-12 20:18:12,246 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. after waiting 0 ms 2023-07-12 20:18:12,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 
2023-07-12 20:18:12,246 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:12,246 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=CLOSED 2023-07-12 20:18:12,246 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092246"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193092246"}]},"ts":"1689193092246"} 2023-07-12 20:18:12,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=29 2023-07-12 20:18:12,252 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=CLOSED 2023-07-12 20:18:12,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=29, state=SUCCESS; CloseRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,43429,1689193089109 in 212 msec 2023-07-12 20:18:12,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092252"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193092252"}]},"ts":"1689193092252"} 2023-07-12 20:18:12,253 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:12,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-12 20:18:12,260 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,46283,1689193085424 in 180 msec 2023-07-12 20:18:12,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:12,261 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:12,261 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e308efaca36c81f63f626f6725eb8a2d: 2023-07-12 20:18:12,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e308efaca36c81f63f626f6725eb8a2d move to jenkins-hbase4.apache.org,41567,1689193085044 record at close sequenceid=2 2023-07-12 20:18:12,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,263 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=CLOSED 2023-07-12 20:18:12,263 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092263"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193092263"}]},"ts":"1689193092263"} 2023-07-12 20:18:12,268 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-12 20:18:12,268 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,43429,1689193089109 in 202 msec 2023-07-12 20:18:12,269 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:12,394 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 20:18:12,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:12,395 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,395 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092395"}]},"ts":"1689193092395"} 2023-07-12 20:18:12,395 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092395"}]},"ts":"1689193092395"} 2023-07-12 20:18:12,395 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:12,395 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,395 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,396 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092395"}]},"ts":"1689193092395"} 2023-07-12 20:18:12,396 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092395"}]},"ts":"1689193092395"} 2023-07-12 20:18:12,396 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193092395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193092395"}]},"ts":"1689193092395"} 2023-07-12 20:18:12,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=35, state=RUNNABLE; OpenRegionProcedure 
131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:12,400 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=32, state=RUNNABLE; OpenRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:12,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:12,404 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:12,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, state=RUNNABLE; OpenRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:12,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e308efaca36c81f63f626f6725eb8a2d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 20:18:12,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:12,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,565 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 
2023-07-12 20:18:12,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,565 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ac77ace0ae4fbdad5bf7568a67a6af2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 20:18:12,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:12,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,569 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,570 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,571 DEBUG [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/f 2023-07-12 20:18:12,571 DEBUG [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/f 2023-07-12 20:18:12,571 DEBUG [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/f 2023-07-12 20:18:12,571 DEBUG [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/f 2023-07-12 20:18:12,572 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e308efaca36c81f63f626f6725eb8a2d columnFamilyName f 2023-07-12 20:18:12,572 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ac77ace0ae4fbdad5bf7568a67a6af2 columnFamilyName f 2023-07-12 20:18:12,573 INFO [StoreOpener-e308efaca36c81f63f626f6725eb8a2d-1] regionserver.HStore(310): Store=e308efaca36c81f63f626f6725eb8a2d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:12,573 INFO [StoreOpener-9ac77ace0ae4fbdad5bf7568a67a6af2-1] regionserver.HStore(310): Store=9ac77ace0ae4fbdad5bf7568a67a6af2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:12,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:12,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:12,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
e308efaca36c81f63f626f6725eb8a2d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10449968480, jitterRate=-0.02677084505558014}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:12,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e308efaca36c81f63f626f6725eb8a2d: 2023-07-12 20:18:12,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ac77ace0ae4fbdad5bf7568a67a6af2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10417396320, jitterRate=-0.02980436384677887}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:12,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ac77ace0ae4fbdad5bf7568a67a6af2: 2023-07-12 20:18:12,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2., pid=43, masterSystemTime=1689193092555 2023-07-12 20:18:12,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d., pid=42, masterSystemTime=1689193092554 2023-07-12 20:18:12,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:12,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 
2023-07-12 20:18:12,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43d6730976614a44b6347298afd55d5d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 20:18:12,590 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,590 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092590"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193092590"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193092590"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193092590"}]},"ts":"1689193092590"} 2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:12,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 131bd4fc840a5a1afe1b095f2acbf0b7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 20:18:12,591 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:12,591 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092591"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193092591"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193092591"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193092591"}]},"ts":"1689193092591"} 2023-07-12 20:18:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,594 INFO [StoreOpener-43d6730976614a44b6347298afd55d5d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,595 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,596 DEBUG [StoreOpener-43d6730976614a44b6347298afd55d5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/f 2023-07-12 20:18:12,596 DEBUG [StoreOpener-43d6730976614a44b6347298afd55d5d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/f 2023-07-12 20:18:12,597 INFO 
[StoreOpener-43d6730976614a44b6347298afd55d5d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43d6730976614a44b6347298afd55d5d columnFamilyName f 2023-07-12 20:18:12,598 DEBUG [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/f 2023-07-12 20:18:12,598 INFO [StoreOpener-43d6730976614a44b6347298afd55d5d-1] regionserver.HStore(310): Store=43d6730976614a44b6347298afd55d5d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:12,598 DEBUG [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/f 2023-07-12 20:18:12,599 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 131bd4fc840a5a1afe1b095f2acbf0b7 columnFamilyName f 2023-07-12 20:18:12,600 INFO [StoreOpener-131bd4fc840a5a1afe1b095f2acbf0b7-1] regionserver.HStore(310): Store=131bd4fc840a5a1afe1b095f2acbf0b7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:12,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,601 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-12 20:18:12,601 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,39187,1689193085232 in 188 msec 2023-07-12 20:18:12,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-12 20:18:12,603 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,41567,1689193085044 in 191 msec 2023-07-12 20:18:12,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,605 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, REOPEN/MOVE in 589 msec 2023-07-12 20:18:12,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, REOPEN/MOVE in 577 msec 2023-07-12 20:18:12,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:12,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:12,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 43d6730976614a44b6347298afd55d5d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10635400640, jitterRate=-0.009501129388809204}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:12,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 43d6730976614a44b6347298afd55d5d: 2023-07-12 20:18:12,612 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d., pid=40, masterSystemTime=1689193092555 2023-07-12 20:18:12,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,616 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:12,616 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 
2023-07-12 20:18:12,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7ae6f8dee8e8dc1394228d7ab5ddf20, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 20:18:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:12,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 131bd4fc840a5a1afe1b095f2acbf0b7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9833296000, jitterRate=-0.08420294523239136}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 131bd4fc840a5a1afe1b095f2acbf0b7: 2023-07-12 20:18:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7., pid=39, masterSystemTime=1689193092554 2023-07-12 20:18:12,620 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,621 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,621 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092621"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193092621"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193092621"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193092621"}]},"ts":"1689193092621"} 2023-07-12 20:18:12,621 DEBUG [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/f 2023-07-12 20:18:12,621 DEBUG 
[StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/f 2023-07-12 20:18:12,622 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7ae6f8dee8e8dc1394228d7ab5ddf20 columnFamilyName f 2023-07-12 20:18:12,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:12,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:12,623 INFO [StoreOpener-b7ae6f8dee8e8dc1394228d7ab5ddf20-1] regionserver.HStore(310): Store=b7ae6f8dee8e8dc1394228d7ab5ddf20/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:12,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:12,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193092624"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193092624"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193092624"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193092624"}]},"ts":"1689193092624"} 2023-07-12 20:18:12,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:12,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=32 2023-07-12 20:18:12,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): 
Finished pid=40, ppid=32, state=SUCCESS; OpenRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,39187,1689193085232 in 225 msec 2023-07-12 20:18:12,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7ae6f8dee8e8dc1394228d7ab5ddf20; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12077941280, jitterRate=0.12484593689441681}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:12,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7ae6f8dee8e8dc1394228d7ab5ddf20: 2023-07-12 20:18:12,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=35 2023-07-12 20:18:12,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=35, state=SUCCESS; OpenRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,41567,1689193085044 in 228 msec 2023-07-12 20:18:12,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20., pid=41, masterSystemTime=1689193092555 2023-07-12 20:18:12,635 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, REOPEN/MOVE in 595 msec 2023-07-12 20:18:12,643 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, REOPEN/MOVE in 574 msec 2023-07-12 20:18:12,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:12,643 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 
2023-07-12 20:18:12,644 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:12,644 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193092644"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193092644"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193092644"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193092644"}]},"ts":"1689193092644"} 2023-07-12 20:18:12,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-12 20:18:12,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,39187,1689193085232 in 244 msec 2023-07-12 20:18:12,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, REOPEN/MOVE in 634 msec 2023-07-12 20:18:13,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-12 20:18:13,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_409149434. 
2023-07-12 20:18:13,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:13,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:13,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:13,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:13,078 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:13,085 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,101 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193093101"}]},"ts":"1689193093101"} 2023-07-12 20:18:13,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 20:18:13,103 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 20:18:13,105 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 20:18:13,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, UNASSIGN}] 2023-07-12 20:18:13,111 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, UNASSIGN 2023-07-12 20:18:13,111 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, UNASSIGN 2023-07-12 20:18:13,111 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, UNASSIGN 2023-07-12 20:18:13,111 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, UNASSIGN 2023-07-12 20:18:13,112 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, UNASSIGN 2023-07-12 20:18:13,112 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:13,112 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193093112"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193093112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193093112"}]},"ts":"1689193093112"} 2023-07-12 20:18:13,116 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:13,116 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:13,116 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093116"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193093116"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193093116"}]},"ts":"1689193093116"} 2023-07-12 20:18:13,116 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:13,117 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093116"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193093116"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193093116"}]},"ts":"1689193093116"} 2023-07-12 20:18:13,117 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:13,117 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093116"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193093116"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193093116"}]},"ts":"1689193093116"} 2023-07-12 20:18:13,117 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193093116"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193093116"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193093116"}]},"ts":"1689193093116"} 2023-07-12 20:18:13,118 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:13,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:13,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=48, state=RUNNABLE; CloseRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:13,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=47, state=RUNNABLE; CloseRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:13,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:13,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 20:18:13,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:13,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7ae6f8dee8e8dc1394228d7ab5ddf20, disabling compactions & flushes 2023-07-12 20:18:13,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 
2023-07-12 20:18:13,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:13,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. after waiting 0 ms 2023-07-12 20:18:13,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:13,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:13,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e308efaca36c81f63f626f6725eb8a2d, disabling compactions & flushes 2023-07-12 20:18:13,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:13,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:13,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. after waiting 0 ms 2023-07-12 20:18:13,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 2023-07-12 20:18:13,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:13,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20. 2023-07-12 20:18:13,292 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7ae6f8dee8e8dc1394228d7ab5ddf20: 2023-07-12 20:18:13,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:13,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:13,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:13,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d. 
2023-07-12 20:18:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 43d6730976614a44b6347298afd55d5d, disabling compactions & flushes 2023-07-12 20:18:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e308efaca36c81f63f626f6725eb8a2d: 2023-07-12 20:18:13,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. after waiting 0 ms 2023-07-12 20:18:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:13,296 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=b7ae6f8dee8e8dc1394228d7ab5ddf20, regionState=CLOSED 2023-07-12 20:18:13,297 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093296"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193093296"}]},"ts":"1689193093296"} 2023-07-12 20:18:13,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:13,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:13,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 131bd4fc840a5a1afe1b095f2acbf0b7, disabling compactions & flushes 2023-07-12 20:18:13,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:13,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:13,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. after waiting 0 ms 2023-07-12 20:18:13,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 
2023-07-12 20:18:13,302 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=e308efaca36c81f63f626f6725eb8a2d, regionState=CLOSED 2023-07-12 20:18:13,302 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093302"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193093302"}]},"ts":"1689193093302"} 2023-07-12 20:18:13,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:13,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d. 2023-07-12 20:18:13,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 43d6730976614a44b6347298afd55d5d: 2023-07-12 20:18:13,319 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-12 20:18:13,319 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure b7ae6f8dee8e8dc1394228d7ab5ddf20, server=jenkins-hbase4.apache.org,39187,1689193085232 in 180 msec 2023-07-12 20:18:13,319 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=47 2023-07-12 20:18:13,320 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=47, state=SUCCESS; CloseRegionProcedure e308efaca36c81f63f626f6725eb8a2d, server=jenkins-hbase4.apache.org,41567,1689193085044 in 183 msec 2023-07-12 20:18:13,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:13,320 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:13,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:13,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7. 2023-07-12 20:18:13,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 131bd4fc840a5a1afe1b095f2acbf0b7: 2023-07-12 20:18:13,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ac77ace0ae4fbdad5bf7568a67a6af2, disabling compactions & flushes 2023-07-12 20:18:13,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 
2023-07-12 20:18:13,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:13,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. after waiting 0 ms 2023-07-12 20:18:13,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:13,324 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=43d6730976614a44b6347298afd55d5d, regionState=CLOSED 2023-07-12 20:18:13,324 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e308efaca36c81f63f626f6725eb8a2d, UNASSIGN in 213 msec 2023-07-12 20:18:13,325 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193093324"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193093324"}]},"ts":"1689193093324"} 2023-07-12 20:18:13,325 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7ae6f8dee8e8dc1394228d7ab5ddf20, UNASSIGN in 213 msec 2023-07-12 20:18:13,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:13,326 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=131bd4fc840a5a1afe1b095f2acbf0b7, regionState=CLOSED 2023-07-12 20:18:13,326 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193093326"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193093326"}]},"ts":"1689193093326"} 2023-07-12 20:18:13,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=48 2023-07-12 20:18:13,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; CloseRegionProcedure 43d6730976614a44b6347298afd55d5d, server=jenkins-hbase4.apache.org,39187,1689193085232 in 207 msec 2023-07-12 20:18:13,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-12 20:18:13,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure 131bd4fc840a5a1afe1b095f2acbf0b7, server=jenkins-hbase4.apache.org,41567,1689193085044 in 201 msec 2023-07-12 20:18:13,335 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43d6730976614a44b6347298afd55d5d, UNASSIGN in 225 msec 2023-07-12 20:18:13,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, 
state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=131bd4fc840a5a1afe1b095f2acbf0b7, UNASSIGN in 227 msec 2023-07-12 20:18:13,339 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 20:18:13,355 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:13,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2. 2023-07-12 20:18:13,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ac77ace0ae4fbdad5bf7568a67a6af2: 2023-07-12 20:18:13,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:13,360 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9ac77ace0ae4fbdad5bf7568a67a6af2, regionState=CLOSED 2023-07-12 20:18:13,360 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193093360"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193093360"}]},"ts":"1689193093360"} 2023-07-12 20:18:13,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-12 20:18:13,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure 9ac77ace0ae4fbdad5bf7568a67a6af2, server=jenkins-hbase4.apache.org,39187,1689193085232 in 244 msec 2023-07-12 20:18:13,378 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=44 2023-07-12 20:18:13,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9ac77ace0ae4fbdad5bf7568a67a6af2, UNASSIGN in 268 msec 2023-07-12 20:18:13,380 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193093379"}]},"ts":"1689193093379"} 2023-07-12 20:18:13,382 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 20:18:13,385 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 20:18:13,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 294 msec 2023-07-12 20:18:13,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-12 20:18:13,408 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 
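For reference, the client-side sequence that produced the DISABLE log above and the TRUNCATE that follows reduces to two Admin calls. A minimal sketch against the HBase 2.x client API (the class name and the standalone main are illustrative only; the test itself drives these calls through HBaseTestingUtility and its admin handle):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableAndTruncate {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Drives a DisableTableProcedure (pid=44 above); the call blocks until
          // every region is CLOSED and the table state in hbase:meta is DISABLED.
          admin.disableTable(tn);
          // Drives a TruncateTableProcedure (pid=55 below); it requires a disabled
          // table, and preserveSplits=true keeps the existing split boundaries.
          admin.truncateTable(tn, true);
        }
      }
    }

The repeated "Checking to see if procedure is done pid=44" / "pid=55" lines are the client polling the master for completion of exactly these two procedures.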
2023-07-12 20:18:13,409 INFO [Listener at localhost/36071] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:13,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-12 20:18:13,431 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 20:18:13,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:13,432 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 20:18:13,434 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:13,435 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 20:18:13,435 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:13,435 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 20:18:13,437 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:13,437 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 20:18:13,449 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:13,449 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:13,449 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:13,449 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:13,449 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:13,455 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits] 2023-07-12 20:18:13,456 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits] 2023-07-12 20:18:13,456 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits] 2023-07-12 20:18:13,457 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits] 2023-07-12 20:18:13,458 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits] 2023-07-12 20:18:13,479 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7/recovered.edits/7.seqid 2023-07-12 20:18:13,480 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d/recovered.edits/7.seqid 2023-07-12 20:18:13,480 DEBUG 
[HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d/recovered.edits/7.seqid 2023-07-12 20:18:13,480 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2/recovered.edits/7.seqid 2023-07-12 20:18:13,481 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20/recovered.edits/7.seqid 2023-07-12 20:18:13,482 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/131bd4fc840a5a1afe1b095f2acbf0b7 2023-07-12 20:18:13,482 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e308efaca36c81f63f626f6725eb8a2d 2023-07-12 20:18:13,483 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43d6730976614a44b6347298afd55d5d 2023-07-12 20:18:13,483 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9ac77ace0ae4fbdad5bf7568a67a6af2 2023-07-12 20:18:13,483 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7ae6f8dee8e8dc1394228d7ab5ddf20 2023-07-12 20:18:13,483 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 20:18:13,513 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 20:18:13,517 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 20:18:13,517 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 20:18:13,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193093518"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193093518"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193093518"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193093518"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193093518"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,520 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 20:18:13,521 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9ac77ace0ae4fbdad5bf7568a67a6af2, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193090764.9ac77ace0ae4fbdad5bf7568a67a6af2.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => b7ae6f8dee8e8dc1394228d7ab5ddf20, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193090764.b7ae6f8dee8e8dc1394228d7ab5ddf20.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e308efaca36c81f63f626f6725eb8a2d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193090764.e308efaca36c81f63f626f6725eb8a2d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 43d6730976614a44b6347298afd55d5d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193090764.43d6730976614a44b6347298afd55d5d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 131bd4fc840a5a1afe1b095f2acbf0b7, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193090764.131bd4fc840a5a1afe1b095f2acbf0b7.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 20:18:13,521 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
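The Delete mutations logged above remove the old regions' rows from hbase:meta before the table is re-created. A client can inspect those rows directly; a sketch assuming an already-open Connection (the helper method name is made up, and the fragment is meant to sit inside a test or utility class):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Print the hbase:meta region rows that belong to one table; meta row keys
    // have the form "<table>,<start key>,<timestamp>.<encoded region name>.".
    static void printMetaRegionRows(Connection conn, String table) throws Exception {
      try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
           ResultScanner scanner = meta.getScanner(
               new Scan().setRowPrefixFilter(Bytes.toBytes(table + ",")))) {
        for (Result r : scanner) {
          System.out.println(Bytes.toStringBinary(r.getRow()));
        }
      }
    }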
2023-07-12 20:18:13,521 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193093521"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:13,523 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 20:18:13,531 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:13,531 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:13,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:13,531 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:13,531 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:13,534 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 empty. 2023-07-12 20:18:13,534 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 empty. 2023-07-12 20:18:13,534 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e empty. 2023-07-12 20:18:13,534 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab empty. 2023-07-12 20:18:13,534 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 empty. 
2023-07-12 20:18:13,534 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:13,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:13,535 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:13,535 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:13,535 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:13,535 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:13,535 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 20:18:13,561 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:13,562 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 598121c4089b7e002b2b62bff5441089, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:13,563 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => d5dd5a1ab87974741cd03478bcd4c9ab, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:13,567 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b9e0209f6c15c206237298c0b2af3d74, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing d5dd5a1ab87974741cd03478bcd4c9ab, disabling compactions & flushes 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 598121c4089b7e002b2b62bff5441089, disabling compactions & flushes 2023-07-12 20:18:13,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:13,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. after waiting 0 ms 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:13,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:13,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 
2023-07-12 20:18:13,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. after waiting 0 ms 2023-07-12 20:18:13,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for d5dd5a1ab87974741cd03478bcd4c9ab: 2023-07-12 20:18:13,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:13,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:13,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 598121c4089b7e002b2b62bff5441089: 2023-07-12 20:18:13,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 517fea464c62636500b5a5f9c2059014, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:13,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => ca2f724decb87a0304ab6c021c86599e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:13,649 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:13,650 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing ca2f724decb87a0304ab6c021c86599e, disabling compactions & flushes 2023-07-12 20:18:13,650 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 
2023-07-12 20:18:13,650 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:13,650 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. after waiting 0 ms 2023-07-12 20:18:13,650 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:13,650 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:13,650 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for ca2f724decb87a0304ab6c021c86599e: 2023-07-12 20:18:13,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b9e0209f6c15c206237298c0b2af3d74, disabling compactions & flushes 2023-07-12 20:18:14,031 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. after waiting 0 ms 2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:14,031 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 
2023-07-12 20:18:14,031 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b9e0209f6c15c206237298c0b2af3d74: 2023-07-12 20:18:14,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 517fea464c62636500b5a5f9c2059014, disabling compactions & flushes 2023-07-12 20:18:14,057 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. after waiting 0 ms 2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,057 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 
2023-07-12 20:18:14,057 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 517fea464c62636500b5a5f9c2059014: 2023-07-12 20:18:14,062 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193094061"}]},"ts":"1689193094061"} 2023-07-12 20:18:14,062 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193094061"}]},"ts":"1689193094061"} 2023-07-12 20:18:14,062 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193094061"}]},"ts":"1689193094061"} 2023-07-12 20:18:14,062 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193094061"}]},"ts":"1689193094061"} 2023-07-12 20:18:14,062 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193094061"}]},"ts":"1689193094061"} 2023-07-12 20:18:14,067 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
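Because the truncate ran with preserveSplits=true, the five new regions are created with the same boundaries as the dropped ones ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). Creating an equivalent layout explicitly would look roughly like the sketch below (the helper name is illustrative, and only the single family 'f' from the descriptor logged above is reproduced, with other attributes left at their defaults):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Four split keys produce five regions, matching "Added 5 regions to meta".
    static void createWithPreservedSplits(Admin admin) throws Exception {
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(
                  TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setColumnFamily(
                  ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                      .setMaxVersions(1)   // VERSIONS => '1' in the logged descriptor
                      .build())
              .build(),
          splits);
    }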
2023-07-12 20:18:14,068 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193094068"}]},"ts":"1689193094068"} 2023-07-12 20:18:14,070 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 20:18:14,075 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:14,075 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:14,075 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:14,075 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:14,076 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, ASSIGN}] 2023-07-12 20:18:14,078 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, ASSIGN 2023-07-12 20:18:14,078 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, ASSIGN 2023-07-12 20:18:14,078 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, ASSIGN 2023-07-12 20:18:14,079 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, ASSIGN 2023-07-12 20:18:14,079 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, ASSIGN 2023-07-12 20:18:14,080 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:14,080 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:14,081 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:14,081 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:14,081 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:14,231 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
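The assignment plan above places the five new regions on the servers shown in the OPENING updates that follow. One way to check the resulting placement from a client, assuming an open Admin handle (helper name illustrative):

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    // Print which region server currently hosts each region of the table,
    // mirroring the regionLocation values written to hbase:meta during OPENING.
    static void printAssignments(Admin admin, TableName tn) throws Exception {
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        for (RegionInfo ri : admin.getRegions(sn)) {
          if (ri.getTable().equals(tn)) {
            System.out.println(ri.getRegionNameAsString() + " -> " + sn);
          }
        }
      }
    }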
2023-07-12 20:18:14,235 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=ca2f724decb87a0304ab6c021c86599e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,235 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094235"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193094235"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193094235"}]},"ts":"1689193094235"} 2023-07-12 20:18:14,236 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=517fea464c62636500b5a5f9c2059014, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,236 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193094236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193094236"}]},"ts":"1689193094236"} 2023-07-12 20:18:14,236 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=d5dd5a1ab87974741cd03478bcd4c9ab, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,237 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b9e0209f6c15c206237298c0b2af3d74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:14,237 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193094236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193094236"}]},"ts":"1689193094236"} 2023-07-12 20:18:14,237 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193094236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193094236"}]},"ts":"1689193094236"} 2023-07-12 20:18:14,237 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=598121c4089b7e002b2b62bff5441089, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:14,237 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094237"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193094237"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193094237"}]},"ts":"1689193094237"} 2023-07-12 20:18:14,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
ca2f724decb87a0304ab6c021c86599e, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:14,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=59, state=RUNNABLE; OpenRegionProcedure 517fea464c62636500b5a5f9c2059014, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:14,241 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=57, state=RUNNABLE; OpenRegionProcedure d5dd5a1ab87974741cd03478bcd4c9ab, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:14,243 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=58, state=RUNNABLE; OpenRegionProcedure b9e0209f6c15c206237298c0b2af3d74, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:14,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=56, state=RUNNABLE; OpenRegionProcedure 598121c4089b7e002b2b62bff5441089, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:14,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:14,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d5dd5a1ab87974741cd03478bcd4c9ab, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 20:18:14,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,412 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 
2023-07-12 20:18:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 598121c4089b7e002b2b62bff5441089, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 20:18:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,417 INFO [StoreOpener-d5dd5a1ab87974741cd03478bcd4c9ab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,418 INFO [StoreOpener-598121c4089b7e002b2b62bff5441089-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,422 DEBUG [StoreOpener-d5dd5a1ab87974741cd03478bcd4c9ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/f 2023-07-12 20:18:14,422 DEBUG [StoreOpener-d5dd5a1ab87974741cd03478bcd4c9ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/f 2023-07-12 20:18:14,422 DEBUG [StoreOpener-598121c4089b7e002b2b62bff5441089-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/f 2023-07-12 20:18:14,423 DEBUG [StoreOpener-598121c4089b7e002b2b62bff5441089-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/f 2023-07-12 20:18:14,423 INFO [StoreOpener-d5dd5a1ab87974741cd03478bcd4c9ab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d5dd5a1ab87974741cd03478bcd4c9ab columnFamilyName f 2023-07-12 20:18:14,424 INFO [StoreOpener-598121c4089b7e002b2b62bff5441089-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 598121c4089b7e002b2b62bff5441089 columnFamilyName f 2023-07-12 20:18:14,424 INFO [StoreOpener-d5dd5a1ab87974741cd03478bcd4c9ab-1] regionserver.HStore(310): Store=d5dd5a1ab87974741cd03478bcd4c9ab/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:14,425 INFO [StoreOpener-598121c4089b7e002b2b62bff5441089-1] regionserver.HStore(310): Store=598121c4089b7e002b2b62bff5441089/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:14,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:14,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:14,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:14,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:14,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d5dd5a1ab87974741cd03478bcd4c9ab; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11629698240, jitterRate=0.08310005068778992}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:14,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d5dd5a1ab87974741cd03478bcd4c9ab: 2023-07-12 20:18:14,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 598121c4089b7e002b2b62bff5441089; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11108245600, jitterRate=0.03453598916530609}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:14,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 598121c4089b7e002b2b62bff5441089: 2023-07-12 20:18:14,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab., pid=63, masterSystemTime=1689193094399 2023-07-12 20:18:14,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089., pid=65, masterSystemTime=1689193094402 2023-07-12 20:18:14,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:14,454 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:14,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:14,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:14,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 
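Once the OpenRegionProcedures finish, the re-created regions become visible to clients through the region locator. A sketch, again assuming an open Connection (helper name illustrative):

    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    // Print each region's start key and its hosting server for the truncated table.
    static void printRegionLocations(Connection conn) throws Exception {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      try (RegionLocator locator = conn.getRegionLocator(tn)) {
        List<HRegionLocation> locations = locator.getAllRegionLocations();
        for (HRegionLocation loc : locations) {
          System.out.println(Bytes.toStringBinary(loc.getRegion().getStartKey())
              + " -> " + loc.getServerName());
        }
      }
    }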
2023-07-12 20:18:14,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 517fea464c62636500b5a5f9c2059014, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 20:18:14,454 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=d5dd5a1ab87974741cd03478bcd4c9ab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,455 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094454"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193094454"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193094454"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193094454"}]},"ts":"1689193094454"} 2023-07-12 20:18:14,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9e0209f6c15c206237298c0b2af3d74, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 20:18:14,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,463 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=598121c4089b7e002b2b62bff5441089, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:14,463 INFO 
[StoreOpener-b9e0209f6c15c206237298c0b2af3d74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,463 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094463"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193094463"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193094463"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193094463"}]},"ts":"1689193094463"} 2023-07-12 20:18:14,465 DEBUG [StoreOpener-b9e0209f6c15c206237298c0b2af3d74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/f 2023-07-12 20:18:14,465 DEBUG [StoreOpener-b9e0209f6c15c206237298c0b2af3d74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/f 2023-07-12 20:18:14,466 INFO [StoreOpener-b9e0209f6c15c206237298c0b2af3d74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9e0209f6c15c206237298c0b2af3d74 columnFamilyName f 2023-07-12 20:18:14,467 INFO [StoreOpener-b9e0209f6c15c206237298c0b2af3d74-1] regionserver.HStore(310): Store=b9e0209f6c15c206237298c0b2af3d74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:14,470 INFO [StoreOpener-517fea464c62636500b5a5f9c2059014-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=57 2023-07-12 20:18:14,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=57, state=SUCCESS; OpenRegionProcedure d5dd5a1ab87974741cd03478bcd4c9ab, 
server=jenkins-hbase4.apache.org,41567,1689193085044 in 220 msec 2023-07-12 20:18:14,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,476 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, ASSIGN in 397 msec 2023-07-12 20:18:14,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=56 2023-07-12 20:18:14,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=56, state=SUCCESS; OpenRegionProcedure 598121c4089b7e002b2b62bff5441089, server=jenkins-hbase4.apache.org,39187,1689193085232 in 228 msec 2023-07-12 20:18:14,479 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, ASSIGN in 401 msec 2023-07-12 20:18:14,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:14,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:14,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b9e0209f6c15c206237298c0b2af3d74; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11716211200, jitterRate=0.09115719795227051}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:14,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b9e0209f6c15c206237298c0b2af3d74: 2023-07-12 20:18:14,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74., pid=64, masterSystemTime=1689193094402 2023-07-12 20:18:14,486 DEBUG [StoreOpener-517fea464c62636500b5a5f9c2059014-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/f 2023-07-12 20:18:14,486 DEBUG [StoreOpener-517fea464c62636500b5a5f9c2059014-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/f 2023-07-12 20:18:14,488 INFO [StoreOpener-517fea464c62636500b5a5f9c2059014-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 517fea464c62636500b5a5f9c2059014 columnFamilyName f 2023-07-12 20:18:14,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:14,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:14,489 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b9e0209f6c15c206237298c0b2af3d74, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:14,489 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094489"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193094489"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193094489"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193094489"}]},"ts":"1689193094489"} 2023-07-12 20:18:14,489 INFO [StoreOpener-517fea464c62636500b5a5f9c2059014-1] regionserver.HStore(310): Store=517fea464c62636500b5a5f9c2059014/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:14,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:14,498 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=58 2023-07-12 20:18:14,498 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=58, state=SUCCESS; OpenRegionProcedure b9e0209f6c15c206237298c0b2af3d74, server=jenkins-hbase4.apache.org,39187,1689193085232 in 250 msec 2023-07-12 20:18:14,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, ASSIGN in 422 msec 2023-07-12 20:18:14,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:14,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 517fea464c62636500b5a5f9c2059014; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11542571360, jitterRate=0.07498572766780853}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:14,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 517fea464c62636500b5a5f9c2059014: 2023-07-12 20:18:14,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014., pid=62, masterSystemTime=1689193094399 2023-07-12 20:18:14,517 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=517fea464c62636500b5a5f9c2059014, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,518 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193094517"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193094517"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193094517"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193094517"}]},"ts":"1689193094517"} 2023-07-12 20:18:14,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:14,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 
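[editor's sketch] The "Opened ..." lines above report SteppingSplitPolicy layered over IncreasingToUpperBoundRegionSplitPolicy/ConstantSizeRegionSplitPolicy, with desiredMaxFileSize derived from the configured max file size plus a per-region jitterRate. A minimal sketch, assuming the HBase 2.x TableDescriptorBuilder API, of pinning that policy on a table; the cluster-wide default is governed by hbase.regionserver.region.split.policy. Not part of the logged test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicySketch {
  // Builds a descriptor whose split policy / max file size mirror what the log reports.
  public static TableDescriptor descriptor() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        // Per-table override; SteppingSplitPolicy is the HBase 2.x default anyway.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        .setMaxFileSize(10L * 1024 * 1024 * 1024) // base for desiredMaxFileSize before jitter
        .build();
  }
}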
2023-07-12 20:18:14,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca2f724decb87a0304ab6c021c86599e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 20:18:14,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:14,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=59 2023-07-12 20:18:14,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=59, state=SUCCESS; OpenRegionProcedure 517fea464c62636500b5a5f9c2059014, server=jenkins-hbase4.apache.org,41567,1689193085044 in 281 msec 2023-07-12 20:18:14,525 INFO [StoreOpener-ca2f724decb87a0304ab6c021c86599e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, ASSIGN in 448 msec 2023-07-12 20:18:14,527 DEBUG [StoreOpener-ca2f724decb87a0304ab6c021c86599e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/f 2023-07-12 20:18:14,527 DEBUG [StoreOpener-ca2f724decb87a0304ab6c021c86599e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/f 2023-07-12 20:18:14,528 INFO [StoreOpener-ca2f724decb87a0304ab6c021c86599e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
ca2f724decb87a0304ab6c021c86599e columnFamilyName f 2023-07-12 20:18:14,528 INFO [StoreOpener-ca2f724decb87a0304ab6c021c86599e-1] regionserver.HStore(310): Store=ca2f724decb87a0304ab6c021c86599e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:14,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:14,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:14,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:14,541 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ca2f724decb87a0304ab6c021c86599e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10361964320, jitterRate=-0.03496687114238739}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:14,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ca2f724decb87a0304ab6c021c86599e: 2023-07-12 20:18:14,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e., pid=61, masterSystemTime=1689193094399 2023-07-12 20:18:14,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:14,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 
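[editor's sketch] At this point all five regions of Group_testTableMoveTruncateAndDrop (start keys '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') have been reopened after the truncate. A minimal sketch, assuming a client Connection to this cluster is available, of confirming the preserved splits from the client side:

import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class RegionCheckSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Expect the 5 regions whose "Opened ..." events appear in the log above.
      List<RegionInfo> regions = admin.getRegions(tn);
      System.out.println("regions = " + regions.size());
    }
  }
}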
2023-07-12 20:18:14,546 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=ca2f724decb87a0304ab6c021c86599e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:14,546 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193094546"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193094546"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193094546"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193094546"}]},"ts":"1689193094546"} 2023-07-12 20:18:14,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-12 20:18:14,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure ca2f724decb87a0304ab6c021c86599e, server=jenkins-hbase4.apache.org,41567,1689193085044 in 311 msec 2023-07-12 20:18:14,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-12 20:18:14,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, ASSIGN in 483 msec 2023-07-12 20:18:14,562 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193094562"}]},"ts":"1689193094562"} 2023-07-12 20:18:14,564 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 20:18:14,566 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 20:18:14,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1480 sec 2023-07-12 20:18:15,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-12 20:18:15,542 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-12 20:18:15,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:15,543 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:15,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:15,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:15,546 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,552 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193095552"}]},"ts":"1689193095552"} 2023-07-12 20:18:15,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 20:18:15,554 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 20:18:15,557 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 20:18:15,559 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, UNASSIGN}] 2023-07-12 20:18:15,561 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, UNASSIGN 2023-07-12 20:18:15,561 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, UNASSIGN 2023-07-12 20:18:15,562 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, UNASSIGN 2023-07-12 20:18:15,562 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, UNASSIGN 2023-07-12 
20:18:15,562 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, UNASSIGN 2023-07-12 20:18:15,563 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=598121c4089b7e002b2b62bff5441089, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:15,563 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=ca2f724decb87a0304ab6c021c86599e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:15,563 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=b9e0209f6c15c206237298c0b2af3d74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:15,563 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=d5dd5a1ab87974741cd03478bcd4c9ab, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:15,563 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=517fea464c62636500b5a5f9c2059014, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:15,563 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193095563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193095563"}]},"ts":"1689193095563"} 2023-07-12 20:18:15,563 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193095563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193095563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193095563"}]},"ts":"1689193095563"} 2023-07-12 20:18:15,563 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193095563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193095563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193095563"}]},"ts":"1689193095563"} 2023-07-12 20:18:15,564 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193095563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193095563"}]},"ts":"1689193095563"} 2023-07-12 20:18:15,563 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193095563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193095563"}]},"ts":"1689193095563"} 2023-07-12 20:18:15,567 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=69, state=RUNNABLE; CloseRegionProcedure b9e0209f6c15c206237298c0b2af3d74, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:15,572 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=71, state=RUNNABLE; CloseRegionProcedure ca2f724decb87a0304ab6c021c86599e, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:15,574 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=67, state=RUNNABLE; CloseRegionProcedure 598121c4089b7e002b2b62bff5441089, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:15,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 517fea464c62636500b5a5f9c2059014, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:15,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=68, state=RUNNABLE; CloseRegionProcedure d5dd5a1ab87974741cd03478bcd4c9ab, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:15,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 20:18:15,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:15,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 598121c4089b7e002b2b62bff5441089, disabling compactions & flushes 2023-07-12 20:18:15,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:15,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:15,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. after waiting 0 ms 2023-07-12 20:18:15,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 
2023-07-12 20:18:15,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:15,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 517fea464c62636500b5a5f9c2059014, disabling compactions & flushes 2023-07-12 20:18:15,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:15,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:15,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. after waiting 0 ms 2023-07-12 20:18:15,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:15,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:15,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:15,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089. 2023-07-12 20:18:15,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 598121c4089b7e002b2b62bff5441089: 2023-07-12 20:18:15,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014. 2023-07-12 20:18:15,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 517fea464c62636500b5a5f9c2059014: 2023-07-12 20:18:15,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:15,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:15,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b9e0209f6c15c206237298c0b2af3d74, disabling compactions & flushes 2023-07-12 20:18:15,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 
2023-07-12 20:18:15,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:15,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. after waiting 0 ms 2023-07-12 20:18:15,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:15,735 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=598121c4089b7e002b2b62bff5441089, regionState=CLOSED 2023-07-12 20:18:15,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:15,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:15,736 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193095735"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193095735"}]},"ts":"1689193095735"} 2023-07-12 20:18:15,738 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=517fea464c62636500b5a5f9c2059014, regionState=CLOSED 2023-07-12 20:18:15,738 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095737"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193095737"}]},"ts":"1689193095737"} 2023-07-12 20:18:15,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ca2f724decb87a0304ab6c021c86599e, disabling compactions & flushes 2023-07-12 20:18:15,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:15,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:15,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. after waiting 0 ms 2023-07-12 20:18:15,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 
2023-07-12 20:18:15,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:15,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74. 2023-07-12 20:18:15,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b9e0209f6c15c206237298c0b2af3d74: 2023-07-12 20:18:15,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:15,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e. 2023-07-12 20:18:15,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ca2f724decb87a0304ab6c021c86599e: 2023-07-12 20:18:15,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:15,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=67 2023-07-12 20:18:15,748 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=b9e0209f6c15c206237298c0b2af3d74, regionState=CLOSED 2023-07-12 20:18:15,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=67, state=SUCCESS; CloseRegionProcedure 598121c4089b7e002b2b62bff5441089, server=jenkins-hbase4.apache.org,39187,1689193085232 in 165 msec 2023-07-12 20:18:15,748 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095748"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193095748"}]},"ts":"1689193095748"} 2023-07-12 20:18:15,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-12 20:18:15,750 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=ca2f724decb87a0304ab6c021c86599e, regionState=CLOSED 2023-07-12 20:18:15,751 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 517fea464c62636500b5a5f9c2059014, server=jenkins-hbase4.apache.org,41567,1689193085044 in 165 msec 2023-07-12 20:18:15,751 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689193095750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193095750"}]},"ts":"1689193095750"} 2023-07-12 20:18:15,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 
ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:15,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:15,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d5dd5a1ab87974741cd03478bcd4c9ab, disabling compactions & flushes 2023-07-12 20:18:15,753 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:15,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:15,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=598121c4089b7e002b2b62bff5441089, UNASSIGN in 189 msec 2023-07-12 20:18:15,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. after waiting 0 ms 2023-07-12 20:18:15,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 2023-07-12 20:18:15,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=517fea464c62636500b5a5f9c2059014, UNASSIGN in 192 msec 2023-07-12 20:18:15,754 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-12 20:18:15,754 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; CloseRegionProcedure b9e0209f6c15c206237298c0b2af3d74, server=jenkins-hbase4.apache.org,39187,1689193085232 in 183 msec 2023-07-12 20:18:15,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=71 2023-07-12 20:18:15,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=71, state=SUCCESS; CloseRegionProcedure ca2f724decb87a0304ab6c021c86599e, server=jenkins-hbase4.apache.org,41567,1689193085044 in 181 msec 2023-07-12 20:18:15,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b9e0209f6c15c206237298c0b2af3d74, UNASSIGN in 195 msec 2023-07-12 20:18:15,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:15,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab. 
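[editor's sketch] The UNASSIGN/CloseRegionProcedure cascade above (pids 67-76) is what DisableTableProcedure pid=66 fans out into. A minimal sketch, assuming a client Connection, of the call that triggers it:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      if (!admin.isTableDisabled(tn)) {
        admin.disableTable(tn); // blocks until every region has been unassigned and CLOSED
      }
    }
  }
}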
2023-07-12 20:18:15,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d5dd5a1ab87974741cd03478bcd4c9ab: 2023-07-12 20:18:15,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ca2f724decb87a0304ab6c021c86599e, UNASSIGN in 198 msec 2023-07-12 20:18:15,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:15,760 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=d5dd5a1ab87974741cd03478bcd4c9ab, regionState=CLOSED 2023-07-12 20:18:15,761 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689193095760"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193095760"}]},"ts":"1689193095760"} 2023-07-12 20:18:15,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=68 2023-07-12 20:18:15,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=68, state=SUCCESS; CloseRegionProcedure d5dd5a1ab87974741cd03478bcd4c9ab, server=jenkins-hbase4.apache.org,41567,1689193085044 in 186 msec 2023-07-12 20:18:15,769 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=66 2023-07-12 20:18:15,769 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d5dd5a1ab87974741cd03478bcd4c9ab, UNASSIGN in 207 msec 2023-07-12 20:18:15,770 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193095770"}]},"ts":"1689193095770"} 2023-07-12 20:18:15,772 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 20:18:15,774 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 20:18:15,777 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 228 msec 2023-07-12 20:18:15,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-12 20:18:15,857 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-12 20:18:15,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,884 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; 
DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_409149434' 2023-07-12 20:18:15,887 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:15,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:15,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:15,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:15,905 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:15,905 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:15,905 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:15,905 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:15,906 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:15,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-12 20:18:15,911 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/recovered.edits] 2023-07-12 20:18:15,912 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/f, FileablePath, 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/recovered.edits] 2023-07-12 20:18:15,913 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/recovered.edits] 2023-07-12 20:18:15,913 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/recovered.edits] 2023-07-12 20:18:15,914 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/recovered.edits] 2023-07-12 20:18:15,925 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e/recovered.edits/4.seqid 2023-07-12 20:18:15,927 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ca2f724decb87a0304ab6c021c86599e 2023-07-12 20:18:15,929 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014/recovered.edits/4.seqid 2023-07-12 20:18:15,929 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab/recovered.edits/4.seqid 2023-07-12 20:18:15,930 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74/recovered.edits/4.seqid 2023-07-12 20:18:15,930 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/517fea464c62636500b5a5f9c2059014 2023-07-12 20:18:15,930 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d5dd5a1ab87974741cd03478bcd4c9ab 2023-07-12 20:18:15,930 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089/recovered.edits/4.seqid 2023-07-12 20:18:15,931 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b9e0209f6c15c206237298c0b2af3d74 2023-07-12 20:18:15,931 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testTableMoveTruncateAndDrop/598121c4089b7e002b2b62bff5441089 2023-07-12 20:18:15,931 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 20:18:15,936 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,943 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 20:18:15,946 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-12 20:18:15,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193095948"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193095948"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193095948"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193095948"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,948 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193095948"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,955 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 20:18:15,955 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 598121c4089b7e002b2b62bff5441089, NAME => 'Group_testTableMoveTruncateAndDrop,,1689193093485.598121c4089b7e002b2b62bff5441089.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => d5dd5a1ab87974741cd03478bcd4c9ab, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689193093485.d5dd5a1ab87974741cd03478bcd4c9ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b9e0209f6c15c206237298c0b2af3d74, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689193093485.b9e0209f6c15c206237298c0b2af3d74.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 517fea464c62636500b5a5f9c2059014, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689193093485.517fea464c62636500b5a5f9c2059014.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => ca2f724decb87a0304ab6c021c86599e, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689193093485.ca2f724decb87a0304ab6c021c86599e.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 20:18:15,956 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 20:18:15,956 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193095956"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:15,958 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 20:18:15,960 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 20:18:15,963 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 89 msec 2023-07-12 20:18:16,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-12 20:18:16,011 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-12 20:18:16,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:16,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,016 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39187] ipc.CallRunner(144): callId: 163 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:48396 deadline: 1689193156016, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43429 startCode=1689193089109. As of locationSeqNum=6. 2023-07-12 20:18:16,127 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,128 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:16,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,130 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:16,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:16,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:16,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_409149434, current retry=0 2023-07-12 20:18:16,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:16,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_409149434 => default 2023-07-12 20:18:16,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,144 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_409149434 2023-07-12 20:18:16,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:16,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:16,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:16,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,159 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:16,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:16,168 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,172 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:16,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:16,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:16,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:16,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194296188, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:16,189 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:16,192 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:16,194 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,195 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:16,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:16,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,230 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=496 (was 423) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-64f1e167-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_746229994_17 at /127.0.0.1:52004 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_130539887_17 at /127.0.0.1:50780 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1777517396_17 at /127.0.0.1:52016 [Receiving block BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-636-acceptor-0@37e88993-ServerConnector@1d8aa3aa{HTTP/1.1, (http/1.1)}{0.0.0.0:36787} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:41485 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_130539887_17 at /127.0.0.1:50764 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_978309808_17 at /127.0.0.1:49574 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1777517396_17 at /127.0.0.1:50736 [Receiving block BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:43429Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43429-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43429 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-641 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51228@0x5663bd11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:41485 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1777517396_17 at /127.0.0.1:38700 [Receiving block BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-600225254-172.31.14.131-1689193079268:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6-prefix:jenkins-hbase4.apache.org,43429,1689193089109 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:51228@0x5663bd11-SendThread(127.0.0.1:51228) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51228@0x5663bd11-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_746229994_17 at /127.0.0.1:52034 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=796 (was 676) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=589 (was 589), ProcessCount=172 (was 172), AvailableMemoryMB=4686 (was 5079) 2023-07-12 20:18:16,254 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496, OpenFileDescriptor=796, MaxFileDescriptor=60000, SystemLoadAverage=589, ProcessCount=172, AvailableMemoryMB=4683 2023-07-12 20:18:16,254 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 20:18:16,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:16,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:16,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:16,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,283 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:16,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:16,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo 
count: 4 2023-07-12 20:18:16,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,303 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:16,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194296303, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:16,303 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:16,305 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:16,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,307 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:16,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:16,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,310 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-12 20:18:16,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:46566 deadline: 1689194296310, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 20:18:16,312 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-12 20:18:16,312 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:46566 deadline: 1689194296312, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 20:18:16,314 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-12 20:18:16,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:46566 deadline: 1689194296314, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 20:18:16,318 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-12 20:18:16,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 20:18:16,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:16,328 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,332 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,332 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,342 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,343 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,344 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
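The rejections of "foo*", "foo@" and "-" above, and the acceptance of "foo_123", all trace back to the checkGroupName() constraint referenced in these stack traces (RSGroupInfoManagerImpl.java:932). Below is a minimal standalone sketch of that rule as it can be inferred from this log alone (letters, digits and underscore pass; anything else is rejected); it is not the HBase source, and the real check may differ.

import java.util.regex.Pattern;

// Hypothetical sketch, not the HBase implementation: approximates the
// group-name constraint observed in this log, where "foo*", "foo@" and "-"
// are rejected with "RSGroup name should only contain alphanumeric
// characters" while "foo_123" is accepted (so underscore evidently passes).
public class RSGroupNameCheckSketch {
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    for (String name : new String[] { "foo*", "foo@", "-", "foo_123" }) {
      try {
        checkGroupName(name);
        System.out.println(name + " -> accepted");
      } catch (IllegalArgumentException e) {
        System.out.println(name + " -> rejected");
      }
    }
  }
}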
2023-07-12 20:18:16,344 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:16,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-12 20:18:16,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:16,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,356 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
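The move-tables / move-servers / remove-rsgroup records above and below are the TestRSGroupsBase cleanup between test methods: everything is moved back to the default group (empty sets are simply ignored by the server) and the temporary groups are dropped, which rewrites the /hbase/rsgroup znodes. A minimal client-side sketch of that sequence follows, assuming the RSGroupAdminClient constructor and the method names visible in the stack traces in this log; exact signatures may differ across branches.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hedged sketch of the cleanup cycle logged above; method names are taken
// from the stack traces in this log (RSGroupAdminClient.moveServers etc.).
public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);
      // Empty sets are ignored server-side ("moveTables() passed an empty
      // set. Ignoring."), exactly as seen during teardown in this log.
      admin.moveTables(Collections.emptySet(), "default");
      admin.moveServers(Collections.emptySet(), "default");
      // Group name taken from the log above; the server-side removal is what
      // produces the "Updating znode: /hbase/rsgroup/..." and
      // "Writing ZK GroupInfo count" records.
      admin.removeRSGroup("foo_123");
    }
  }
}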
2023-07-12 20:18:16,356 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:16,357 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,359 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:16,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:16,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,371 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:16,372 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:16,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:16,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:16,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194296392, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:16,394 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:16,395 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:16,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,397 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:16,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:16,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,417 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=499 (was 496) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=796 (was 796), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=589 (was 589), ProcessCount=172 (was 172), AvailableMemoryMB=4673 (was 4683) 2023-07-12 20:18:16,436 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=499, OpenFileDescriptor=796, MaxFileDescriptor=60000, SystemLoadAverage=589, ProcessCount=172, AvailableMemoryMB=4672 2023-07-12 20:18:16,436 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 20:18:16,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,441 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,442 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:16,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:16,442 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:16,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:16,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:16,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:16,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:16,452 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:16,455 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:16,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:16,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:16,463 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:16,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:16,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194296471, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:16,472 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:16,473 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:16,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,475 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:16,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:16,476 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,477 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,478 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:16,478 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:16,479 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-12 20:18:16,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:16,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:16,486 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:16,492 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:41567] to rsgroup bar 2023-07-12 20:18:16,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:16,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:16,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:16,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:16,498 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(238): Moving server region aa1db639fdc668f9efd7f5e68d620495, which do not belong to RSGroup bar 2023-07-12 20:18:16,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE 2023-07-12 20:18:16,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(238): Moving server region 455649b011ddbbda985bd47060a43b64, which do not belong to RSGroup bar 2023-07-12 20:18:16,500 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE 2023-07-12 20:18:16,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE 2023-07-12 20:18:16,500 INFO 
[PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:16,501 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE 2023-07-12 20:18:16,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-12 20:18:16,501 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193096500"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193096500"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193096500"}]},"ts":"1689193096500"} 2023-07-12 20:18:16,502 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:16,502 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193096502"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193096502"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193096502"}]},"ts":"1689193096502"} 2023-07-12 20:18:16,503 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:16,504 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=79, state=RUNNABLE; CloseRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:16,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:16,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa1db639fdc668f9efd7f5e68d620495, disabling compactions & flushes 2023-07-12 20:18:16,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:16,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:16,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. after waiting 0 ms 2023-07-12 20:18:16,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:16,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing aa1db639fdc668f9efd7f5e68d620495 1/1 column families, dataSize=4.98 KB heapSize=8.39 KB 2023-07-12 20:18:16,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.98 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:16,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:16,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/41db8cb6a739499997e7c30c9804231f as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:16,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:16,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/41db8cb6a739499997e7c30c9804231f, entries=9, sequenceid=32, filesize=5.5 K 2023-07-12 20:18:16,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.98 KB/5100, heapSize ~8.38 KB/8576, currentSize=0 B/0 for aa1db639fdc668f9efd7f5e68d620495 in 57ms, sequenceid=32, compaction requested=false 2023-07-12 20:18:16,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-12 20:18:16,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:16,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:16,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:16,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aa1db639fdc668f9efd7f5e68d620495 move to jenkins-hbase4.apache.org,46283,1689193085424 record at close sequenceid=32 2023-07-12 20:18:16,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:16,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,741 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=CLOSED 2023-07-12 20:18:16,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 455649b011ddbbda985bd47060a43b64, disabling compactions & flushes 2023-07-12 20:18:16,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. after waiting 0 ms 2023-07-12 20:18:16,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,744 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193096741"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193096741"}]},"ts":"1689193096741"} 2023-07-12 20:18:16,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-12 20:18:16,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; CloseRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,43429,1689193089109 in 243 msec 2023-07-12 20:18:16,750 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:16,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-12 20:18:16,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 
2023-07-12 20:18:16,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:16,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 455649b011ddbbda985bd47060a43b64 move to jenkins-hbase4.apache.org,46283,1689193085424 record at close sequenceid=10 2023-07-12 20:18:16,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,767 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=CLOSED 2023-07-12 20:18:16,767 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193096767"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193096767"}]},"ts":"1689193096767"} 2023-07-12 20:18:16,787 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=79 2023-07-12 20:18:16,787 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=79, state=SUCCESS; CloseRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,43429,1689193089109 in 278 msec 2023-07-12 20:18:16,788 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:16,789 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:16,789 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193096789"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193096789"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193096789"}]},"ts":"1689193096789"} 2023-07-12 20:18:16,791 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:16,791 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193096791"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193096791"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193096791"}]},"ts":"1689193096791"} 2023-07-12 20:18:16,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=78, state=RUNNABLE; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:16,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=79, state=RUNNABLE; OpenRegionProcedure 
455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:16,955 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 455649b011ddbbda985bd47060a43b64, NAME => 'hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:16,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:16,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,962 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,968 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:16,968 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info 2023-07-12 20:18:16,968 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 455649b011ddbbda985bd47060a43b64 columnFamilyName info 2023-07-12 20:18:16,986 DEBUG [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(539): loaded hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/info/fbf3c9b199b34ae0843ec8d79454096d 2023-07-12 20:18:16,986 INFO [StoreOpener-455649b011ddbbda985bd47060a43b64-1] regionserver.HStore(310): 
Store=455649b011ddbbda985bd47060a43b64/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:16,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,992 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:16,993 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 455649b011ddbbda985bd47060a43b64; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10036665280, jitterRate=-0.06526270508766174}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:16,993 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:16,994 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64., pid=83, masterSystemTime=1689193096948 2023-07-12 20:18:16,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,995 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:16,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa1db639fdc668f9efd7f5e68d620495, NAME => 'hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:16,996 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=455649b011ddbbda985bd47060a43b64, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
service=MultiRowMutationService 2023-07-12 20:18:16,996 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193096996"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193096996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193096996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193096996"}]},"ts":"1689193096996"} 2023-07-12 20:18:16,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:16,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:16,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:16,998 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:17,000 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:17,000 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m 2023-07-12 20:18:17,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=79 2023-07-12 20:18:17,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=79, state=SUCCESS; OpenRegionProcedure 455649b011ddbbda985bd47060a43b64, server=jenkins-hbase4.apache.org,46283,1689193085424 in 205 msec 2023-07-12 20:18:17,000 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa1db639fdc668f9efd7f5e68d620495 columnFamilyName m 2023-07-12 20:18:17,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=455649b011ddbbda985bd47060a43b64, REOPEN/MOVE in 500 msec 2023-07-12 20:18:17,016 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(539): loaded hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/1d9a0266a7c54d468008bb4fa2345577 2023-07-12 20:18:17,026 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:17,026 DEBUG [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(539): loaded hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/41db8cb6a739499997e7c30c9804231f 2023-07-12 20:18:17,026 INFO [StoreOpener-aa1db639fdc668f9efd7f5e68d620495-1] regionserver.HStore(310): Store=aa1db639fdc668f9efd7f5e68d620495/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:17,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:17,033 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:17,037 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:17,039 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa1db639fdc668f9efd7f5e68d620495; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@e2ad83, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:17,039 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:17,040 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495., pid=82, masterSystemTime=1689193096948 2023-07-12 20:18:17,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:17,041 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:17,042 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=aa1db639fdc668f9efd7f5e68d620495, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:17,042 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193097042"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193097042"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193097042"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193097042"}]},"ts":"1689193097042"} 2023-07-12 20:18:17,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=78 2023-07-12 20:18:17,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=78, state=SUCCESS; OpenRegionProcedure aa1db639fdc668f9efd7f5e68d620495, server=jenkins-hbase4.apache.org,46283,1689193085424 in 252 msec 2023-07-12 20:18:17,048 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=aa1db639fdc668f9efd7f5e68d620495, REOPEN/MOVE in 548 msec 2023-07-12 20:18:17,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-12 20:18:17,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044, jenkins-hbase4.apache.org,43429,1689193089109] are moved back to default 2023-07-12 20:18:17,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 20:18:17,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:17,503 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43429] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:58598 deadline: 1689193157502, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46283 startCode=1689193085424. As of locationSeqNum=32. 
2023-07-12 20:18:17,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:17,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:17,618 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 20:18:17,619 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:17,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:17,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:17,624 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:17,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-12 20:18:17,625 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43429] ipc.CallRunner(144): callId: 199 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:58612 deadline: 1689193157625, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46283 startCode=1689193085424. As of locationSeqNum=32. 
2023-07-12 20:18:17,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 20:18:17,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 20:18:17,729 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:17,729 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:17,730 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:17,730 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:17,735 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:17,737 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:17,738 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 empty. 2023-07-12 20:18:17,738 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:17,738 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 20:18:17,754 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:17,755 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b3a58a5fa5cb3e8978c12c087568c360, NAME => 'Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing b3a58a5fa5cb3e8978c12c087568c360, disabling compactions & flushes 2023-07-12 20:18:17,765 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. after waiting 0 ms 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:17,765 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:17,765 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:17,768 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:17,768 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193097768"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193097768"}]},"ts":"1689193097768"} 2023-07-12 20:18:17,770 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:17,771 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:17,771 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193097771"}]},"ts":"1689193097771"} 2023-07-12 20:18:17,772 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 20:18:17,776 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, ASSIGN}] 2023-07-12 20:18:17,779 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, ASSIGN 2023-07-12 20:18:17,780 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:17,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 20:18:17,932 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:17,932 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193097932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193097932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193097932"}]},"ts":"1689193097932"} 2023-07-12 20:18:17,934 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:18,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:18,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3a58a5fa5cb3e8978c12c087568c360, NAME => 'Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:18,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:18,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,098 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,100 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:18,100 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:18,101 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3a58a5fa5cb3e8978c12c087568c360 columnFamilyName f 2023-07-12 20:18:18,101 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(310): Store=b3a58a5fa5cb3e8978c12c087568c360/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:18,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,103 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:18,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3a58a5fa5cb3e8978c12c087568c360; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9801836480, jitterRate=-0.08713284134864807}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:18,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:18,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360., pid=86, masterSystemTime=1689193098089 2023-07-12 20:18:18,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:18,117 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:18,117 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193098117"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193098117"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193098117"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193098117"}]},"ts":"1689193098117"} 2023-07-12 20:18:18,120 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-12 20:18:18,120 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424 in 185 msec 2023-07-12 20:18:18,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 20:18:18,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, ASSIGN in 344 msec 2023-07-12 20:18:18,126 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:18,126 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193098126"}]},"ts":"1689193098126"} 2023-07-12 20:18:18,128 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 20:18:18,130 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:18,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 509 msec 2023-07-12 20:18:18,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-12 20:18:18,229 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-12 20:18:18,229 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 20:18:18,230 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:18,238 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-12 20:18:18,239 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:18,239 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 20:18:18,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 20:18:18,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:18,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:18,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:18,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:18,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 20:18:18,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region b3a58a5fa5cb3e8978c12c087568c360 to RSGroup bar 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 20:18:18,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:18,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE 2023-07-12 20:18:18,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 20:18:18,259 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE 2023-07-12 20:18:18,261 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:18,262 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193098261"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193098261"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193098261"}]},"ts":"1689193098261"} 2023-07-12 20:18:18,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:18,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3a58a5fa5cb3e8978c12c087568c360, disabling compactions & flushes 2023-07-12 20:18:18,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. after waiting 0 ms 2023-07-12 20:18:18,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:18,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:18,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:18,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b3a58a5fa5cb3e8978c12c087568c360 move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:18,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,439 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSED 2023-07-12 20:18:18,439 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193098439"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193098439"}]},"ts":"1689193098439"} 2023-07-12 20:18:18,453 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 20:18:18,453 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424 in 187 msec 2023-07-12 20:18:18,454 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:18,604 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 20:18:18,605 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:18,605 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193098605"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193098605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193098605"}]},"ts":"1689193098605"} 2023-07-12 20:18:18,607 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:18,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:18,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3a58a5fa5cb3e8978c12c087568c360, NAME => 'Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:18,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:18,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,766 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,767 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:18,767 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:18,768 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3a58a5fa5cb3e8978c12c087568c360 columnFamilyName f 2023-07-12 20:18:18,768 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(310): Store=b3a58a5fa5cb3e8978c12c087568c360/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:18,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,771 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:18,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3a58a5fa5cb3e8978c12c087568c360; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10877125920, jitterRate=0.01301129162311554}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:18,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:18,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360., pid=89, masterSystemTime=1689193098759 2023-07-12 20:18:18,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,780 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:18,780 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:18,780 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193098780"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193098780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193098780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193098780"}]},"ts":"1689193098780"} 2023-07-12 20:18:18,784 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-12 20:18:18,784 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,39187,1689193085232 in 175 msec 2023-07-12 20:18:18,785 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE in 529 msec 2023-07-12 20:18:18,794 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 20:18:19,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-12 20:18:19,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-12 20:18:19,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:19,263 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:19,263 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:19,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-12 20:18:19,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:19,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 20:18:19,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:19,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:46566 deadline: 1689194299266, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 20:18:19,268 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:19,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:19,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:46566 deadline: 1689194299268, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 20:18:19,270 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 20:18:19,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:19,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:19,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:19,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:19,275 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 20:18:19,275 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region b3a58a5fa5cb3e8978c12c087568c360 to RSGroup default 2023-07-12 20:18:19,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE 2023-07-12 20:18:19,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 20:18:19,277 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE 2023-07-12 20:18:19,277 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:19,278 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193099277"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193099277"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193099277"}]},"ts":"1689193099277"} 2023-07-12 20:18:19,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:19,431 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-12 20:18:19,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3a58a5fa5cb3e8978c12c087568c360, disabling compactions & flushes 2023-07-12 20:18:19,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:19,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:19,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. after waiting 0 ms 2023-07-12 20:18:19,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:19,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:19,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:19,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:19,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b3a58a5fa5cb3e8978c12c087568c360 move to jenkins-hbase4.apache.org,46283,1689193085424 record at close sequenceid=5 2023-07-12 20:18:19,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,440 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSED 2023-07-12 20:18:19,440 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193099440"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193099440"}]},"ts":"1689193099440"} 2023-07-12 20:18:19,443 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-12 20:18:19,443 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,39187,1689193085232 in 163 msec 2023-07-12 20:18:19,444 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:19,594 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:19,595 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193099594"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193099594"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193099594"}]},"ts":"1689193099594"} 2023-07-12 20:18:19,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:19,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:19,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3a58a5fa5cb3e8978c12c087568c360, NAME => 'Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:19,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:19,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,755 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,756 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:19,756 DEBUG [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f 2023-07-12 20:18:19,757 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3a58a5fa5cb3e8978c12c087568c360 columnFamilyName f 2023-07-12 20:18:19,757 INFO [StoreOpener-b3a58a5fa5cb3e8978c12c087568c360-1] regionserver.HStore(310): Store=b3a58a5fa5cb3e8978c12c087568c360/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:19,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,760 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:19,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3a58a5fa5cb3e8978c12c087568c360; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11720773120, jitterRate=0.09158205986022949}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:19,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:19,766 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360., pid=92, masterSystemTime=1689193099748 2023-07-12 20:18:19,768 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:19,768 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:19,768 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:19,768 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193099768"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193099768"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193099768"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193099768"}]},"ts":"1689193099768"} 2023-07-12 20:18:19,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-12 20:18:19,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424 in 174 msec 2023-07-12 20:18:19,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, REOPEN/MOVE in 497 msec 2023-07-12 20:18:20,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-12 20:18:20,277 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
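[editor's note] The REOPEN/MOVE flow above (pid=90 and its Close/OpenRegionProcedure children, finished once RSGroupAdminServer reports all regions moved to the target group) is what a caller triggers through RSGroupAdminService.MoveTables. The sketch below is a minimal, hypothetical client-side equivalent using the RSGroupAdminClient class that appears in the stack traces later in this log; it assumes an open Connection and is not the test's own code.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {  // class name is illustrative
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the table back to the default group; the master then runs a
          // REOPEN/MOVE TransitRegionStateProcedure per region, as seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
              "default");
        }
      }
    }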
2023-07-12 20:18:20,277 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:20,281 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,281 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 20:18:20,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:20,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:46566 deadline: 1689194300284, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
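[editor's note] The ConstraintException above is the guard in RSGroupAdminServer.removeRSGroup: a group that still owns servers cannot be removed. A hedged sketch of the required ordering, drain the group of servers first and only then remove it, using the same RSGroupAdminClient API named in the stack trace (helper name and the "default" target group are illustrative):

    import java.util.HashSet;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RemoveGroupSketch {  // class name is illustrative
      // Assumes an RSGroupAdminClient built from an open Connection, as in the earlier sketch.
      static void removeGroupSafely(RSGroupAdminClient rsGroupAdmin, String group) throws Exception {
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        if (info != null && !info.getServers().isEmpty()) {
          // Move the group's servers out first; removeRSGroup throws ConstraintException otherwise.
          rsGroupAdmin.moveServers(new HashSet<>(info.getServers()), "default");
        }
        rsGroupAdmin.removeRSGroup(group);
      }
    }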
2023-07-12 20:18:20,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:20,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 20:18:20,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:20,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 20:18:20,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044, jenkins-hbase4.apache.org,43429,1689193089109] are moved back to bar 2023-07-12 20:18:20,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 20:18:20,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:20,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-12 20:18:20,302 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43429] ipc.CallRunner(144): callId: 224 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:58612 deadline: 1689193160302, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46283 startCode=1689193085424. As of locationSeqNum=10. 
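[editor's note] The repeated ListRSGroupInfos calls above are how the test verifies group membership after each move. For reference, a small hypothetical sketch of the same inspection from a client, again assuming an RSGroupAdminClient built from an open Connection:

    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {  // class name is illustrative
      static void dumpGroups(RSGroupAdminClient rsGroupAdmin) throws Exception {
        // Print each group's name, member servers, and assigned tables.
        for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
          System.out.println(info.getName()
              + " servers=" + info.getServers()
              + " tables=" + info.getTables());
        }
      }
    }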
2023-07-12 20:18:20,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:20,416 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:20,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,423 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,425 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 20:18:20,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-12 20:18:20,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 20:18:20,430 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193100430"}]},"ts":"1689193100430"} 2023-07-12 20:18:20,437 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 20:18:20,439 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 20:18:20,440 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, UNASSIGN}] 2023-07-12 20:18:20,442 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, UNASSIGN 2023-07-12 20:18:20,443 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:20,443 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193100443"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193100443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193100443"}]},"ts":"1689193100443"} 2023-07-12 20:18:20,445 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:20,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 20:18:20,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:20,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3a58a5fa5cb3e8978c12c087568c360, disabling compactions & flushes 2023-07-12 20:18:20,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:20,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:20,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. after waiting 0 ms 2023-07-12 20:18:20,598 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 2023-07-12 20:18:20,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 20:18:20,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360. 
2023-07-12 20:18:20,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3a58a5fa5cb3e8978c12c087568c360: 2023-07-12 20:18:20,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:20,608 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=b3a58a5fa5cb3e8978c12c087568c360, regionState=CLOSED 2023-07-12 20:18:20,608 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689193100607"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193100607"}]},"ts":"1689193100607"} 2023-07-12 20:18:20,611 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 20:18:20,611 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure b3a58a5fa5cb3e8978c12c087568c360, server=jenkins-hbase4.apache.org,46283,1689193085424 in 164 msec 2023-07-12 20:18:20,613 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-12 20:18:20,613 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b3a58a5fa5cb3e8978c12c087568c360, UNASSIGN in 171 msec 2023-07-12 20:18:20,613 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193100613"}]},"ts":"1689193100613"} 2023-07-12 20:18:20,615 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-12 20:18:20,616 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-12 20:18:20,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 191 msec 2023-07-12 20:18:20,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-12 20:18:20,732 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-12 20:18:20,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-12 20:18:20,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,737 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 20:18:20,738 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:20,744 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:20,746 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits] 2023-07-12 20:18:20,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 20:18:20,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/10.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360/recovered.edits/10.seqid 2023-07-12 20:18:20,755 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testFailRemoveGroup/b3a58a5fa5cb3e8978c12c087568c360 2023-07-12 20:18:20,755 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 20:18:20,758 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,760 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 20:18:20,762 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 20:18:20,764 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,764 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-12 20:18:20,764 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193100764"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:20,766 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 20:18:20,766 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b3a58a5fa5cb3e8978c12c087568c360, NAME => 'Group_testFailRemoveGroup,,1689193097620.b3a58a5fa5cb3e8978c12c087568c360.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 20:18:20,766 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-12 20:18:20,766 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193100766"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:20,768 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 20:18:20,770 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 20:18:20,771 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 37 msec 2023-07-12 20:18:20,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-12 20:18:20,847 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-12 20:18:20,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:20,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
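[editor's note] The DisableTableProcedure (pid=93) and DeleteTableProcedure (pid=96) completed above correspond to the standard Admin-side disable/delete calls used during test cleanup. A minimal sketch, assuming an Admin obtained from an open Connection (not the test's own code; the table name is taken from the log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropTableSketch {  // class name is illustrative
      static void dropTable(Admin admin) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        if (admin.tableExists(tn)) {
          // A table must be disabled before it can be deleted.
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);
          }
          admin.deleteTable(tn);
        }
      }
    }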
2023-07-12 20:18:20,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:20,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:20,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:20,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:20,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:20,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:20,864 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:20,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:20,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:20,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:20,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:20,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:20,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194300882, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:20,883 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:20,885 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:20,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,887 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:20,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:20,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:20,911 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=503 (was 499) Potentially hanging thread: hconnection-0x292363c-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1793218059_17 at /127.0.0.1:38140 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1793218059_17 at /127.0.0.1:38132 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5fc06702-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1793218059_17 at /127.0.0.1:49574 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x292363c-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1175782679_17 at /127.0.0.1:49594 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1793218059_17 at /127.0.0.1:52034 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=794 (was 796), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=605 (was 589) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=4479 (was 4672) 2023-07-12 20:18:20,911 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 20:18:20,936 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=503, OpenFileDescriptor=794, MaxFileDescriptor=60000, SystemLoadAverage=605, ProcessCount=172, AvailableMemoryMB=4478 2023-07-12 20:18:20,936 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 20:18:20,936 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 20:18:20,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:20,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:20,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:20,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:20,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:20,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:20,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:20,954 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:20,958 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:20,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:20,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 20:18:20,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:20,966 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:20,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,972 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:20,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:20,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194300972, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:20,973 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:20,976 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:20,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,977 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:20,978 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:20,978 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:20,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:20,979 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:20,980 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:20,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:20,987 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:20,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:20,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:20,993 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187] to rsgroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:20,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:20,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:20,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:20,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:20,998 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232] are moved back to default 2023-07-12 20:18:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1038580445 2023-07-12 20:18:20,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:21,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:21,002 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:21,004 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1038580445 2023-07-12 20:18:21,004 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:21,007 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:21,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:21,016 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:21,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-12 20:18:21,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 20:18:21,019 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:21,019 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:21,020 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:21,020 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:21,023 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:21,025 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,026 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae empty. 2023-07-12 20:18:21,026 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,026 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 20:18:21,046 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:21,048 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 44aa0af120c6bcd412cf67379cce93ae, NAME => 'GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
44aa0af120c6bcd412cf67379cce93ae, disabling compactions & flushes 2023-07-12 20:18:21,061 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. after waiting 0 ms 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,061 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,061 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 44aa0af120c6bcd412cf67379cce93ae: 2023-07-12 20:18:21,064 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:21,065 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193101065"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193101065"}]},"ts":"1689193101065"} 2023-07-12 20:18:21,067 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:21,079 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:21,079 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193101079"}]},"ts":"1689193101079"} 2023-07-12 20:18:21,081 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 20:18:21,085 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:21,086 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:21,086 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:21,086 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:21,086 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:21,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, ASSIGN}] 2023-07-12 20:18:21,089 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, ASSIGN 2023-07-12 20:18:21,090 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41567,1689193085044; forceNewPlan=false, retain=false 2023-07-12 20:18:21,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 20:18:21,241 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
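The records above show the test adding the group Group_testMultiTableMove_1038580445 and moving server jenkins-hbase4.apache.org:39187 into it through the RSGroupAdminEndpoint on the master. Below is a minimal client-side sketch of those calls using the module's RSGroupAdminClient (the same class the test's VerifyingRSGroupAdminClient delegates to in the stack trace earlier); the sketch's class name and the connection setup are assumptions added for illustration, not part of the test itself.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
          // "add rsgroup Group_testMultiTableMove_1038580445"
          rsGroupAdmin.addRSGroup("Group_testMultiTableMove_1038580445");
          // "move servers [jenkins-hbase4.apache.org:39187] to rsgroup Group_testMultiTableMove_1038580445"
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 39187)),
              "Group_testMultiTableMove_1038580445");
          // "initiates rsgroup info retrieval, group=Group_testMultiTableMove_1038580445"
          System.out.println(rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_1038580445"));
        }
      }
    }

Note that RSGroupAdminClient is an internal (IA.Private) client; outside test code the same operations are usually issued through the HBase shell's rsgroup commands.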
2023-07-12 20:18:21,243 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:21,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193101243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193101243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193101243"}]},"ts":"1689193101243"} 2023-07-12 20:18:21,246 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:21,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 20:18:21,403 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 44aa0af120c6bcd412cf67379cce93ae, NAME => 'GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:21,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:21,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,405 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,407 DEBUG [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/f 2023-07-12 20:18:21,407 DEBUG [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/f 2023-07-12 20:18:21,407 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 44aa0af120c6bcd412cf67379cce93ae columnFamilyName f 2023-07-12 20:18:21,408 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] regionserver.HStore(310): Store=44aa0af120c6bcd412cf67379cce93ae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:21,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:21,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:21,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 44aa0af120c6bcd412cf67379cce93ae; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11099432800, jitterRate=0.03371523320674896}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:21,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 44aa0af120c6bcd412cf67379cce93ae: 2023-07-12 20:18:21,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae., pid=99, masterSystemTime=1689193101399 2023-07-12 20:18:21,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:21,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 
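The CompactionConfiguration record above enumerates the store's compaction parameters at region open. The sketch below maps each printed value to the stock HBase configuration key it is normally read from; treating these keys as the source of the logged numbers is an assumption based on default HBase settings, not something the log states.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettingsProbe {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // size [minCompactSize:128 MB, maxCompactSize:8.00 EB)
        System.out.println(conf.getLong("hbase.hstore.compaction.min.size", 134217728L));
        System.out.println(conf.getLong("hbase.hstore.compaction.max.size", Long.MAX_VALUE));
        // files [minFilesToCompact:3, maxFilesToCompact:10)
        System.out.println(conf.getInt("hbase.hstore.compaction.min", 3));
        System.out.println(conf.getInt("hbase.hstore.compaction.max", 10));
        // ratio 1.200000; off-peak ratio 5.000000
        System.out.println(conf.getFloat("hbase.hstore.compaction.ratio", 1.2F));
        System.out.println(conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F));
        // throttle point 2684354560; major period 604800000; major jitter 0.500000
        System.out.println(conf.getLong("hbase.regionserver.thread.compaction.throttle", 2684354560L));
        System.out.println(conf.getLong("hbase.hregion.majorcompaction", 604800000L));
        System.out.println(conf.getFloat("hbase.hregion.majorcompaction.jitter", 0.50F));
      }
    }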
2023-07-12 20:18:21,416 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:21,416 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193101416"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193101416"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193101416"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193101416"}]},"ts":"1689193101416"} 2023-07-12 20:18:21,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-12 20:18:21,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,41567,1689193085044 in 172 msec 2023-07-12 20:18:21,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-12 20:18:21,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, ASSIGN in 333 msec 2023-07-12 20:18:21,422 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:21,422 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193101422"}]},"ts":"1689193101422"} 2023-07-12 20:18:21,423 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 20:18:21,429 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:21,430 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 422 msec 2023-07-12 20:18:21,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-12 20:18:21,623 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-12 20:18:21,623 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 20:18:21,623 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:21,629 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
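At this point CreateTableProcedure pid=97 has completed and the listener thread is verifying that GrouptestMultiTableMoveA's region is assigned. A hedged sketch of the equivalent client calls follows; the descriptor mirrors the one logged above (single family 'f', REGION_REPLICATION 1, everything else at defaults), while the class name, connection setup, and the final availability check are assumptions added for illustration (the test itself uses HBaseTestingUtility#waitUntilAllRegionsAssigned for this step).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
          // One column family 'f'; other attributes left at defaults, matching the logged descriptor.
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          admin.createTable(desc); // drives a CreateTableProcedure like pid=97 above
          // Simple post-create check that all regions are open and serving.
          System.out.println(admin.isTableAvailable(table));
        }
      }
    }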
2023-07-12 20:18:21,629 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:21,629 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 20:18:21,632 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:21,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:21,636 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:21,637 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-12 20:18:21,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 20:18:21,640 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:21,640 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:21,641 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:21,641 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:21,644 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:21,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 20:18:21,874 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:21,875 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a empty. 
2023-07-12 20:18:21,875 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:21,876 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 20:18:21,953 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:21,955 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49031278d9914261cd0a796f942b809a, NAME => 'GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 49031278d9914261cd0a796f942b809a, disabling compactions & flushes 2023-07-12 20:18:22,016 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. after waiting 0 ms 2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:22,016 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 
2023-07-12 20:18:22,016 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 49031278d9914261cd0a796f942b809a: 2023-07-12 20:18:22,020 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:22,021 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193102021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193102021"}]},"ts":"1689193102021"} 2023-07-12 20:18:22,023 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:22,024 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:22,024 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193102024"}]},"ts":"1689193102024"} 2023-07-12 20:18:22,026 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 20:18:22,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:22,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:22,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:22,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:22,029 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:22,030 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, ASSIGN}] 2023-07-12 20:18:22,032 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, ASSIGN 2023-07-12 20:18:22,033 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:22,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 20:18:22,183 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 20:18:22,184 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:22,185 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193102184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193102184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193102184"}]},"ts":"1689193102184"} 2023-07-12 20:18:22,186 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:22,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:22,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49031278d9914261cd0a796f942b809a, NAME => 'GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:22,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:22,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,344 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,345 DEBUG [StoreOpener-49031278d9914261cd0a796f942b809a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/f 2023-07-12 20:18:22,346 DEBUG [StoreOpener-49031278d9914261cd0a796f942b809a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/f 2023-07-12 20:18:22,346 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49031278d9914261cd0a796f942b809a columnFamilyName f 2023-07-12 20:18:22,346 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] regionserver.HStore(310): Store=49031278d9914261cd0a796f942b809a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:22,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:22,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:22,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49031278d9914261cd0a796f942b809a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11922971520, jitterRate=0.11041325330734253}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:22,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49031278d9914261cd0a796f942b809a: 2023-07-12 20:18:22,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a., pid=102, masterSystemTime=1689193102338 2023-07-12 20:18:22,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:22,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 
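With both tables now online, the records that follow show the actual test action: a GetRSGroupInfoOfTable for each table and a MoveTables of both into Group_testMultiTableMove_1038580445. A minimal sketch of those client calls, again via the module's RSGroupAdminClient; the class name and connection setup are assumptions added for illustration.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<TableName> tables = new HashSet<>();
          tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
          tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
          // "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup ..."
          rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1038580445");
          // "initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA"
          System.out.println(
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA")));
        }
      }
    }

Moving a table to a group triggers the REOPEN/MOVE TransitRegionStateProcedures seen below, since each region must be closed on its current server and reopened on a server belonging to the target group.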
2023-07-12 20:18:22,355 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:22,355 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193102355"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193102355"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193102355"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193102355"}]},"ts":"1689193102355"} 2023-07-12 20:18:22,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-12 20:18:22,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,46283,1689193085424 in 171 msec 2023-07-12 20:18:22,363 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-12 20:18:22,363 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, ASSIGN in 331 msec 2023-07-12 20:18:22,364 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:22,364 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193102364"}]},"ts":"1689193102364"} 2023-07-12 20:18:22,368 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 20:18:22,370 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:22,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 20:18:22,373 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 739 msec 2023-07-12 20:18:22,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-12 20:18:22,874 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-12 20:18:22,874 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 20:18:22,875 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:22,879 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. 
Checking AM states. 2023-07-12 20:18:22,879 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:22,879 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 20:18:22,880 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:22,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 20:18:22,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 20:18:22,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:22,894 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:22,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:22,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:22,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 49031278d9914261cd0a796f942b809a to RSGroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, REOPEN/MOVE 2023-07-12 20:18:22,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,903 
INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, REOPEN/MOVE 2023-07-12 20:18:22,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 44aa0af120c6bcd412cf67379cce93ae to RSGroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:22,904 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:22,904 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193102904"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193102904"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193102904"}]},"ts":"1689193102904"} 2023-07-12 20:18:22,906 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:22,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, REOPEN/MOVE 2023-07-12 20:18:22,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1038580445, current retry=0 2023-07-12 20:18:22,909 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, REOPEN/MOVE 2023-07-12 20:18:22,911 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:22,911 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193102911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193102911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193102911"}]},"ts":"1689193102911"} 2023-07-12 20:18:22,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:23,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49031278d9914261cd0a796f942b809a, disabling compactions & flushes 2023-07-12 20:18:23,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. after waiting 0 ms 2023-07-12 20:18:23,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 44aa0af120c6bcd412cf67379cce93ae, disabling compactions & flushes 2023-07-12 20:18:23,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:23,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:23,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. after waiting 0 ms 2023-07-12 20:18:23,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:23,071 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:23,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49031278d9914261cd0a796f942b809a: 2023-07-12 20:18:23,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 49031278d9914261cd0a796f942b809a move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:23,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:23,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 
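The REOPEN/MOVE procedures (pid=103/104) and the region closes above are all fallout of a single MoveTables request against the RSGroupAdminService endpoint, logged a few lines earlier. A hedged sketch of issuing that request from the client side; RSGroupAdminClient (the client wrapper in the hbase-rsgroup module), its exact signatures, and the variable names are assumptions, while the group and table names are taken from the log:

    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Hypothetical sketch: move both test tables into the (already created) target group.
    static void moveTestTables(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(
          new HashSet<>(Arrays.asList(
              TableName.valueOf("GrouptestMultiTableMoveA"),
              TableName.valueOf("GrouptestMultiTableMoveB"))),
          "Group_testMultiTableMove_1038580445");   // target group, as logged by RSGroupAdminEndpoint
    }

Each table move becomes one TransitRegionStateProcedure per region: close on the current server, then reopen on a server that belongs to the target group, which is exactly the close/open pair traced in this part of the log.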
2023-07-12 20:18:23,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 44aa0af120c6bcd412cf67379cce93ae: 2023-07-12 20:18:23,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 44aa0af120c6bcd412cf67379cce93ae move to jenkins-hbase4.apache.org,39187,1689193085232 record at close sequenceid=2 2023-07-12 20:18:23,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,079 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=CLOSED 2023-07-12 20:18:23,079 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103078"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193103078"}]},"ts":"1689193103078"} 2023-07-12 20:18:23,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,080 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=CLOSED 2023-07-12 20:18:23,080 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103080"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193103080"}]},"ts":"1689193103080"} 2023-07-12 20:18:23,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-12 20:18:23,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,46283,1689193085424 in 174 msec 2023-07-12 20:18:23,083 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:23,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-12 20:18:23,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,41567,1689193085044 in 170 msec 2023-07-12 20:18:23,085 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39187,1689193085232; forceNewPlan=false, retain=false 2023-07-12 20:18:23,234 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 
20:18:23,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:23,234 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103234"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193103234"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193103234"}]},"ts":"1689193103234"} 2023-07-12 20:18:23,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103234"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193103234"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193103234"}]},"ts":"1689193103234"} 2023-07-12 20:18:23,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=104, state=RUNNABLE; OpenRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:23,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=103, state=RUNNABLE; OpenRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:23,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49031278d9914261cd0a796f942b809a, NAME => 'GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:23,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:23,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,394 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,395 DEBUG [StoreOpener-49031278d9914261cd0a796f942b809a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/f 2023-07-12 20:18:23,395 DEBUG [StoreOpener-49031278d9914261cd0a796f942b809a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/f 2023-07-12 20:18:23,396 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49031278d9914261cd0a796f942b809a columnFamilyName f 2023-07-12 20:18:23,396 INFO [StoreOpener-49031278d9914261cd0a796f942b809a-1] regionserver.HStore(310): Store=49031278d9914261cd0a796f942b809a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:23,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:23,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49031278d9914261cd0a796f942b809a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11818510240, jitterRate=0.10068453848361969}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:23,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49031278d9914261cd0a796f942b809a: 2023-07-12 20:18:23,403 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a., pid=108, masterSystemTime=1689193103388 2023-07-12 20:18:23,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:23,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 
2023-07-12 20:18:23,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:23,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 44aa0af120c6bcd412cf67379cce93ae, NAME => 'GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:23,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,405 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:23,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:23,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,406 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103405"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193103405"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193103405"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193103405"}]},"ts":"1689193103405"} 2023-07-12 20:18:23,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,407 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,408 DEBUG [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/f 2023-07-12 20:18:23,408 DEBUG [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/f 2023-07-12 20:18:23,408 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=103 2023-07-12 20:18:23,409 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=103, state=SUCCESS; OpenRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,39187,1689193085232 in 171 msec 2023-07-12 20:18:23,409 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 44aa0af120c6bcd412cf67379cce93ae columnFamilyName f 2023-07-12 20:18:23,409 INFO [StoreOpener-44aa0af120c6bcd412cf67379cce93ae-1] regionserver.HStore(310): Store=44aa0af120c6bcd412cf67379cce93ae/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:23,410 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, REOPEN/MOVE in 508 msec 2023-07-12 20:18:23,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:23,415 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 44aa0af120c6bcd412cf67379cce93ae; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9811042240, jitterRate=-0.08627548813819885}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:23,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 44aa0af120c6bcd412cf67379cce93ae: 2023-07-12 20:18:23,415 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae., pid=107, masterSystemTime=1689193103388 2023-07-12 20:18:23,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:23,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 
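Once both regions reopen on jenkins-hbase4.apache.org,39187 the move is complete, and the GetRSGroupInfoOfTable calls that follow confirm the new membership. A hedged verification sketch, reusing the rsGroupAdmin handle assumed in the previous snippet; the helper name is an assumption and the expected group name comes from the log:

    import static org.junit.Assert.assertEquals;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hypothetical check: both tables should now report the target group.
    static void assertMoved(RSGroupAdminClient rsGroupAdmin) throws Exception {
      RSGroupInfo infoA = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
      RSGroupInfo infoB = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
      assertEquals("Group_testMultiTableMove_1038580445", infoA.getName());
      assertEquals(infoA.getName(), infoB.getName());
    }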
2023-07-12 20:18:23,418 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:23,418 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103418"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193103418"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193103418"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193103418"}]},"ts":"1689193103418"} 2023-07-12 20:18:23,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=104 2023-07-12 20:18:23,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=104, state=SUCCESS; OpenRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,39187,1689193085232 in 183 msec 2023-07-12 20:18:23,422 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, REOPEN/MOVE in 517 msec 2023-07-12 20:18:23,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-12 20:18:23,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1038580445. 2023-07-12 20:18:23,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:23,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:23,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:23,916 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 20:18:23,916 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:23,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 20:18:23,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:23,918 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:23,918 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:23,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1038580445 2023-07-12 20:18:23,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:23,921 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 20:18:23,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-12 20:18:23,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:23,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 20:18:23,925 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193103925"}]},"ts":"1689193103925"} 2023-07-12 20:18:23,926 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 20:18:23,928 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 20:18:23,929 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, UNASSIGN}] 2023-07-12 20:18:23,930 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, UNASSIGN 2023-07-12 20:18:23,931 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:23,931 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193103931"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193103931"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193103931"}]},"ts":"1689193103931"} 2023-07-12 20:18:23,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, 
server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:23,938 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 20:18:24,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 20:18:24,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:24,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 44aa0af120c6bcd412cf67379cce93ae, disabling compactions & flushes 2023-07-12 20:18:24,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:24,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:24,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. after waiting 0 ms 2023-07-12 20:18:24,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 2023-07-12 20:18:24,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:24,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae. 
2023-07-12 20:18:24,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 44aa0af120c6bcd412cf67379cce93ae: 2023-07-12 20:18:24,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:24,096 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=44aa0af120c6bcd412cf67379cce93ae, regionState=CLOSED 2023-07-12 20:18:24,097 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193104096"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193104096"}]},"ts":"1689193104096"} 2023-07-12 20:18:24,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-12 20:18:24,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 44aa0af120c6bcd412cf67379cce93ae, server=jenkins-hbase4.apache.org,39187,1689193085232 in 167 msec 2023-07-12 20:18:24,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-12 20:18:24,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=44aa0af120c6bcd412cf67379cce93ae, UNASSIGN in 172 msec 2023-07-12 20:18:24,106 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193104106"}]},"ts":"1689193104106"} 2023-07-12 20:18:24,107 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 20:18:24,109 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 20:18:24,111 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 189 msec 2023-07-12 20:18:24,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-12 20:18:24,227 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-12 20:18:24,228 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-12 20:18:24,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,231 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1038580445' 2023-07-12 20:18:24,232 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,241 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:24,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:24,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:24,243 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits] 2023-07-12 20:18:24,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 20:18:24,249 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae/recovered.edits/7.seqid 2023-07-12 20:18:24,249 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveA/44aa0af120c6bcd412cf67379cce93ae 2023-07-12 20:18:24,249 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 20:18:24,252 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,254 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 20:18:24,256 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 20:18:24,257 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,257 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
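The block above is the standard teardown for GrouptestMultiTableMoveA: a DisableTableProcedure (pid=109) unassigns the region, then a DeleteTableProcedure (pid=112) archives the region directory and scrubs the table from hbase:meta. From the client this is the usual pair of Admin calls; a minimal sketch, where the admin handle, the helper name, and the tableExists guard are assumptions:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Hypothetical teardown matching the procedures in the log.
    static void dropTable(Admin admin, String name) throws Exception {
      TableName table = TableName.valueOf(name);
      if (admin.tableExists(table)) {
        admin.disableTable(table);   // DisableTableProcedure
        admin.deleteTable(table);    // DeleteTableProcedure (archives region dirs, cleans hbase:meta)
      }
    }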
2023-07-12 20:18:24,258 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193104257"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:24,260 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 20:18:24,260 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 44aa0af120c6bcd412cf67379cce93ae, NAME => 'GrouptestMultiTableMoveA,,1689193101006.44aa0af120c6bcd412cf67379cce93ae.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 20:18:24,260 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 20:18:24,260 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193104260"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:24,262 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 20:18:24,264 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 20:18:24,266 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 37 msec 2023-07-12 20:18:24,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-12 20:18:24,346 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-12 20:18:24,347 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 20:18:24,347 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-12 20:18:24,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 20:18:24,356 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193104355"}]},"ts":"1689193104355"} 2023-07-12 20:18:24,357 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 20:18:24,360 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 20:18:24,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, UNASSIGN}] 2023-07-12 20:18:24,363 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, UNASSIGN 2023-07-12 20:18:24,364 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:24,364 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193104364"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193104364"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193104364"}]},"ts":"1689193104364"} 2023-07-12 20:18:24,365 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,39187,1689193085232}] 2023-07-12 20:18:24,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 20:18:24,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:24,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49031278d9914261cd0a796f942b809a, disabling compactions & flushes 2023-07-12 20:18:24,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:24,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:24,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. after waiting 0 ms 2023-07-12 20:18:24,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 2023-07-12 20:18:24,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:24,528 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a. 
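The repeated "Checking to see if procedure is done pid=113" lines are the client-side HBaseAdmin TableFuture polling the master until the DisableTableProcedure finishes. The same polling can be driven explicitly through the async Admin variant; a minimal sketch, where the admin handle and the 60-second timeout are assumptions:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Hypothetical: disable asynchronously and block on the returned future,
    // which is what produces the isProcedureDone polling seen in the log.
    static void disableAndWait(Admin admin) throws Exception {
      admin.disableTableAsync(TableName.valueOf("GrouptestMultiTableMoveB"))
           .get(60, TimeUnit.SECONDS);
    }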
2023-07-12 20:18:24,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49031278d9914261cd0a796f942b809a: 2023-07-12 20:18:24,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49031278d9914261cd0a796f942b809a 2023-07-12 20:18:24,532 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=49031278d9914261cd0a796f942b809a, regionState=CLOSED 2023-07-12 20:18:24,532 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689193104532"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193104532"}]},"ts":"1689193104532"} 2023-07-12 20:18:24,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 20:18:24,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 49031278d9914261cd0a796f942b809a, server=jenkins-hbase4.apache.org,39187,1689193085232 in 169 msec 2023-07-12 20:18:24,539 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-12 20:18:24,540 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=49031278d9914261cd0a796f942b809a, UNASSIGN in 176 msec 2023-07-12 20:18:24,540 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193104540"}]},"ts":"1689193104540"} 2023-07-12 20:18:24,542 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 20:18:24,544 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 20:18:24,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 198 msec 2023-07-12 20:18:24,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-12 20:18:24,658 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-12 20:18:24,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-12 20:18:24,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,663 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1038580445' 2023-07-12 20:18:24,665 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:24,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,669 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:24,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:24,673 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits] 2023-07-12 20:18:24,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 20:18:24,680 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits/7.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a/recovered.edits/7.seqid 2023-07-12 20:18:24,680 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/GrouptestMultiTableMoveB/49031278d9914261cd0a796f942b809a 2023-07-12 20:18:24,681 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 20:18:24,687 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,692 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 20:18:24,697 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-12 20:18:24,698 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,698 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-12 20:18:24,698 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193104698"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:24,702 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 20:18:24,702 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 49031278d9914261cd0a796f942b809a, NAME => 'GrouptestMultiTableMoveB,,1689193101631.49031278d9914261cd0a796f942b809a.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 20:18:24,702 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 20:18:24,714 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193104714"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:24,716 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 20:18:24,719 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 20:18:24,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 60 msec 2023-07-12 20:18:24,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-12 20:18:24,779 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-12 20:18:24,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:24,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
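
The entries above trace the master-side DisableTableProcedure (pid=113) and DeleteTableProcedure (pid=116) for GrouptestMultiTableMoveB: the region is closed, its directory is archived by HFileArchiver, and the region and table-state rows are removed from hbase:meta. The sketch below is a minimal, hypothetical client-side equivalent of what the test driver issues to trigger those procedures; the table name is taken from the log, the connection boilerplate is assumed to point at the minicluster, and the class name is invented for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
          // Blocks until the DisableTableProcedure completes (pid=113 in the log above).
          admin.disableTable(table);
          // Triggers the DeleteTableProcedure that archives the region directories
          // and deletes the table's rows from hbase:meta (pid=116 in the log above).
          admin.deleteTable(table);
        }
      }
    }
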
2023-07-12 20:18:24,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:24,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187] to rsgroup default 2023-07-12 20:18:24,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1038580445 2023-07-12 20:18:24,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:24,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1038580445, current retry=0 2023-07-12 20:18:24,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232] are moved back to Group_testMultiTableMove_1038580445 2023-07-12 20:18:24,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1038580445 => default 2023-07-12 20:18:24,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:24,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1038580445 2023-07-12 20:18:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:24,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:24,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:24,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
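
Once the table is gone, the teardown pass above moves server jenkins-hbase4.apache.org:39187 back to the default group and removes Group_testMultiTableMove_1038580445 (the MoveServers and RemoveRSGroup requests). A rough sketch of the corresponding client calls follows, assuming the hbase-rsgroup RSGroupAdminClient named in the stack traces later in this log, with its Connection constructor, moveServers(Set<Address>, String) and removeRSGroup(String) methods; host, port and group name are copied from the log, the class name is invented.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Move the region server that belonged to the test group back to 'default'
          // (the MoveServers request in the log above).
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 39187)),
              "default");
          // Drop the now-empty test group (the RemoveRSGroup request in the log above).
          groups.removeRSGroup("Group_testMultiTableMove_1038580445");
        }
      }
    }
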
2023-07-12 20:18:24,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:24,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:24,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:24,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:24,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:24,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:24,815 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:24,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:24,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:24,828 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:24,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:24,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:24,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194304834, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:24,835 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:24,837 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:24,838 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,838 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,838 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:24,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:24,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,864 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=498 (was 503), OpenFileDescriptor=763 (was 794), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=605 (was 605), ProcessCount=172 (was 172), AvailableMemoryMB=4374 (was 4478) 2023-07-12 20:18:24,883 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=498, OpenFileDescriptor=763, MaxFileDescriptor=60000, SystemLoadAverage=605, ProcessCount=172, AvailableMemoryMB=4370 2023-07-12 20:18:24,883 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-12 20:18:24,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:24,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:24,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:24,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:24,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:24,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:24,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:24,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:24,899 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:24,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:24,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:24,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:24,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:24,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:24,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194304924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:24,925 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:24,927 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:24,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,929 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:24,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:24,930 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,930 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:24,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-12 20:18:24,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:24,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:24,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:24,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup oldGroup 2023-07-12 20:18:24,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:24,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:24,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:24,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to default 2023-07-12 20:18:24,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 20:18:24,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:24,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,959 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 20:18:24,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 20:18:24,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:24,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:24,966 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-12 20:18:24,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 20:18:24,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:24,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:24,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:24,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,985 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43429] to rsgroup anotherRSGroup 2023-07-12 20:18:24,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:24,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 20:18:24,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:24,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:24,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:24,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:24,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43429,1689193089109] are moved back to default 2023-07-12 20:18:24,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 20:18:24,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:24,994 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:24,994 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:24,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 20:18:24,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 20:18:25,000 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,006 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 20:18:25,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:46566 deadline: 1689194305005, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 20:18:25,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 20:18:25,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:46566 deadline: 1689194305007, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 20:18:25,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-12 20:18:25,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:46566 deadline: 1689194305009, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 20:18:25,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-12 20:18:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:46566 deadline: 1689194305010, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 20:18:25,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,015 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:25,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
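
The ConstraintExceptions above are the checks exercised by testRenameRSGroupConstraints: renaming a group that does not exist, renaming onto a name that is already taken, and renaming the reserved default group are all rejected by RSGroupInfoManagerImpl.renameRSGroup. Below is a hedged sketch of how a client would hit and assert those rejections; it assumes the rsgroup client in this branch exposes renameRSGroup(oldName, newName) and that the server-side ConstraintException is re-thrown on the client, which is what the WARN traces earlier in the log suggest. The helper and class names are invented for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameRSGroupConstraintsExample {

      // Small functional interface so the failing calls can be passed as lambdas.
      interface RenameCall { void run() throws Exception; }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Mirrors the rejected rename requests in the log above.
          expectConstraintViolation(() -> groups.renameRSGroup("nonExistingRSGroup", "newRSGroup1"));
          expectConstraintViolation(() -> groups.renameRSGroup("oldGroup", "anotherRSGroup"));
          expectConstraintViolation(() -> groups.renameRSGroup("default", "newRSGroup2"));
        }
      }

      static void expectConstraintViolation(RenameCall call) throws Exception {
        try {
          call.run();
          throw new AssertionError("rename should have been rejected");
        } catch (ConstraintException expected) {
          System.out.println("rejected as expected: " + expected.getMessage());
        }
      }
    }
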
2023-07-12 20:18:25,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:25,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43429] to rsgroup default 2023-07-12 20:18:25,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 20:18:25,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:25,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:25,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 20:18:25,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43429,1689193089109] are moved back to anotherRSGroup 2023-07-12 20:18:25,026 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 20:18:25,027 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:25,043 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-12 20:18:25,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:25,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 20:18:25,059 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:25,060 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:25,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-12 20:18:25,060 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:25,061 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:25,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 20:18:25,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:25,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 20:18:25,076 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to oldGroup 2023-07-12 20:18:25,076 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 20:18:25,076 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:25,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-12 20:18:25,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:25,082 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:25,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:25,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
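
The move/remove records above are the per-test cleanup: servers are moved out of anotherRSGroup and oldGroup back into default, and the emptied groups are then removed. A hedged sketch of that teardown loop, assuming the same RSGroupAdminClient/Address API; the helper name is illustrative only:

    import java.io.IOException;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupTeardownSketch {
      static void restoreDefaultGroup(Connection conn) throws IOException {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : admin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue; // the default group itself is never removed
          }
          // Move every server in the group back to default, then drop the empty group.
          Set<Address> servers = group.getServers();
          if (!servers.isEmpty()) {
            admin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
          }
          admin.removeRSGroup(group.getName());
        }
      }
    }

The default group stays in place throughout: it is the fallback assignment target, which is why the log only ever shows servers being moved back into it rather than it being dropped.
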
2023-07-12 20:18:25,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:25,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:25,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:25,085 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:25,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:25,095 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:25,098 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:25,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:25,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:25,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:25,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:25,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194305110, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:25,111 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:25,113 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:25,113 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,114 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,114 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:25,115 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:25,115 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,132 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=502 (was 498) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=763 (was 763), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=605 (was 605), ProcessCount=173 (was 172) - ProcessCount LEAK? -, AvailableMemoryMB=4288 (was 4370) 2023-07-12 20:18:25,133 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 20:18:25,152 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=502, OpenFileDescriptor=763, MaxFileDescriptor=60000, SystemLoadAverage=605, ProcessCount=173, AvailableMemoryMB=4282 2023-07-12 20:18:25,152 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-12 20:18:25,152 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 20:18:25,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:25,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
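
The WARN "Got this on setup, FYI" above is expected in this run: the harness re-creates a group named master and attempts to move the master's address (port 42533, the master RPC port rather than a live region server) into it, so the move is rejected with a ConstraintException that the test tolerates. A sketch of that tolerant step, under the same client API assumptions as the earlier sketches:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MasterGroupSetupSketch {
      static void addMasterGroup(Connection conn, Address masterAddress) throws IOException {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        admin.addRSGroup("master");
        try {
          // The master address is not a region server, so this move is expected to fail.
          admin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException e) {
          // Corresponds to the "Got this on setup, FYI" warning in the log; ignored.
        }
      }
    }
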
2023-07-12 20:18:25,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:25,159 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:25,159 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:25,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:25,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:25,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:25,170 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:25,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:25,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:25,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:25,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:25,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:25,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194305184, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:25,185 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:25,187 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:25,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,188 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,189 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:25,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:25,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:25,191 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-12 20:18:25,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:25,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:25,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:25,202 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,202 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,205 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup oldgroup 2023-07-12 20:18:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:25,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:25,212 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:25,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to default 2023-07-12 20:18:25,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 20:18:25,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:25,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:25,216 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:25,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 20:18:25,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:25,220 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:25,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 20:18:25,222 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:25,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-12 20:18:25,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 20:18:25,224 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:25,224 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,225 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,225 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:25,229 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:25,230 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,231 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 empty. 
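
The create above is a single-family table named testRename with one column family tr, submitted as CreateTableProcedure pid=117. An equivalent client-side create with the standard 2.x Admin and TableDescriptorBuilder API might look like the following; column-family attributes are left at builder defaults here, which may differ in detail (e.g. BLOOMFILTER) from the descriptor printed in the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateTestRenameTableSketch {
      static void createTable(Connection conn) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("testRename"))
            // Single column family "tr"; other attributes use builder defaults.
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
            .build();
        try (Admin admin = conn.getAdmin()) {
          // Submits a CreateTableProcedure on the master, as seen in the log.
          admin.createTable(desc);
        }
      }
    }
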
2023-07-12 20:18:25,232 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,232 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 20:18:25,252 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:25,254 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5590a98ce5dcc4d33b0fc067112783c0, NAME => 'testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 5590a98ce5dcc4d33b0fc067112783c0, disabling compactions & flushes 2023-07-12 20:18:25,277 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. after waiting 0 ms 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,277 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,277 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:25,279 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:25,280 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193105280"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193105280"}]},"ts":"1689193105280"} 2023-07-12 20:18:25,282 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:25,283 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:25,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193105283"}]},"ts":"1689193105283"} 2023-07-12 20:18:25,284 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 20:18:25,287 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:25,287 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:25,287 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:25,287 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:25,287 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, ASSIGN}] 2023-07-12 20:18:25,289 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, ASSIGN 2023-07-12 20:18:25,291 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:25,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 20:18:25,441 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 20:18:25,442 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:25,443 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193105442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193105442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193105442"}]},"ts":"1689193105442"} 2023-07-12 20:18:25,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:25,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 20:18:25,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5590a98ce5dcc4d33b0fc067112783c0, NAME => 'testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:25,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,602 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,604 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:25,604 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:25,604 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5590a98ce5dcc4d33b0fc067112783c0 columnFamilyName tr 2023-07-12 20:18:25,605 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(310): Store=5590a98ce5dcc4d33b0fc067112783c0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:25,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:25,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5590a98ce5dcc4d33b0fc067112783c0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10811535680, jitterRate=0.006902724504470825}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:25,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:25,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0., pid=119, masterSystemTime=1689193105595 2023-07-12 20:18:25,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,613 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
2023-07-12 20:18:25,613 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:25,613 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193105613"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193105613"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193105613"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193105613"}]},"ts":"1689193105613"} 2023-07-12 20:18:25,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 20:18:25,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,46283,1689193085424 in 171 msec 2023-07-12 20:18:25,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 20:18:25,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, ASSIGN in 329 msec 2023-07-12 20:18:25,618 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:25,618 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193105618"}]},"ts":"1689193105618"} 2023-07-12 20:18:25,620 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 20:18:25,623 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:25,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 403 msec 2023-07-12 20:18:25,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 20:18:25,826 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-12 20:18:25,826 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 20:18:25,827 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:25,829 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-12 20:18:25,829 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:25,830 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
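The block above traces the master-side CreateTableProcedure (pid=117) for testRename through ASSIGN, ENABLED, and the test's assignment wait. As a hedged sketch only (not the actual test source), the client-side calls that would produce these entries look roughly like this; the table name and column family "tr" come from the log, the HBaseTestingUtility instance is an assumed test fixture.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  static void createAndWait(HBaseTestingUtility util) throws Exception {
    TableName table = TableName.valueOf("testRename");
    try (Admin admin = util.getConnection().getAdmin()) {
      // Drives the master-side CreateTableProcedure (pid=117 in the log above).
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
    }
    // Corresponds to the "Waiting until all regions of table testRename get
    // assigned" / "All regions for table testRename assigned" entries.
    util.waitUntilAllRegionsAssigned(table);
  }
}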
2023-07-12 20:18:25,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-12 20:18:25,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:25,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:25,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:25,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:25,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 20:18:25,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 5590a98ce5dcc4d33b0fc067112783c0 to RSGroup oldgroup 2023-07-12 20:18:25,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:25,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:25,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:25,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:25,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:25,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE 2023-07-12 20:18:25,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 20:18:25,839 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE 2023-07-12 20:18:25,839 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:25,839 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193105839"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193105839"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193105839"}]},"ts":"1689193105839"} 2023-07-12 20:18:25,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:25,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:25,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5590a98ce5dcc4d33b0fc067112783c0, disabling compactions & flushes 2023-07-12 20:18:25,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:25,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. after waiting 0 ms 2023-07-12 20:18:25,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:26,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:26,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:26,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:26,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5590a98ce5dcc4d33b0fc067112783c0 move to jenkins-hbase4.apache.org,41567,1689193085044 record at close sequenceid=2 2023-07-12 20:18:26,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,003 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=CLOSED 2023-07-12 20:18:26,003 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193106003"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193106003"}]},"ts":"1689193106003"} 2023-07-12 20:18:26,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 20:18:26,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,46283,1689193085424 in 165 msec 2023-07-12 20:18:26,007 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41567,1689193085044; 
forceNewPlan=false, retain=false 2023-07-12 20:18:26,157 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 20:18:26,157 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:26,158 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193106157"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193106157"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193106157"}]},"ts":"1689193106157"} 2023-07-12 20:18:26,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:26,314 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:26,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5590a98ce5dcc4d33b0fc067112783c0, NAME => 'testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:26,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:26,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,316 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,317 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:26,318 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:26,318 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5590a98ce5dcc4d33b0fc067112783c0 columnFamilyName tr 2023-07-12 20:18:26,319 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(310): Store=5590a98ce5dcc4d33b0fc067112783c0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:26,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:26,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5590a98ce5dcc4d33b0fc067112783c0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10642721600, jitterRate=-0.00881931185722351}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:26,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:26,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0., pid=122, masterSystemTime=1689193106311 2023-07-12 20:18:26,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:26,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
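The CLOSE/OPEN sequence above (pid=120 REOPEN/MOVE with subprocedures 121 and 122) is the server-side effect of moving the table into the "oldgroup" RSGroup, logged earlier as "move tables [testRename] to rsgroup oldgroup". A minimal sketch of the triggering client call, assuming the branch-2 RSGroupAdminClient API and an existing Connection:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

class MoveTableSketch {
  static void moveTableToOldGroup(Connection connection) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    // Emits the "move tables [testRename] to rsgroup oldgroup" entry and blocks
    // until the resulting REOPEN/MOVE procedures complete on the master.
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
  }
}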
2023-07-12 20:18:26,327 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:26,327 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193106327"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193106327"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193106327"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193106327"}]},"ts":"1689193106327"} 2023-07-12 20:18:26,330 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 20:18:26,330 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,41567,1689193085044 in 170 msec 2023-07-12 20:18:26,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE in 492 msec 2023-07-12 20:18:26,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 20:18:26,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-12 20:18:26,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:26,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:26,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:26,844 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:26,845 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 20:18:26,845 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:26,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 20:18:26,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:26,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 20:18:26,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:26,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:26,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:26,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-12 20:18:26,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:26,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:26,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:26,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:26,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:26,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:26,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:26,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:26,859 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43429] to rsgroup normal 2023-07-12 20:18:26,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:26,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:26,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:26,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:26,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:26,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:26,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43429,1689193089109] are moved back to default 2023-07-12 20:18:26,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 20:18:26,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:26,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 20:18:26,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:26,869 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:26,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 20:18:26,872 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:26,872 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-12 20:18:26,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 20:18:26,874 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:26,874 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:26,874 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:26,875 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 20:18:26,875 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:26,885 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:26,887 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:26,887 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 empty. 2023-07-12 20:18:26,888 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:26,888 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 20:18:26,902 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:26,904 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6777c5b3891de176411b89338412bae7, NAME => 'unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 6777c5b3891de176411b89338412bae7, disabling compactions & flushes 2023-07-12 20:18:26,915 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. after waiting 0 ms 2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:26,915 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
2023-07-12 20:18:26,915 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:26,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:26,919 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193106919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193106919"}]},"ts":"1689193106919"} 2023-07-12 20:18:26,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:26,921 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:26,921 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193106921"}]},"ts":"1689193106921"} 2023-07-12 20:18:26,922 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 20:18:26,926 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, ASSIGN}] 2023-07-12 20:18:26,927 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, ASSIGN 2023-07-12 20:18:26,928 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:26,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 20:18:27,080 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:27,080 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107080"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193107080"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193107080"}]},"ts":"1689193107080"} 2023-07-12 20:18:27,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:27,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-12 20:18:27,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6777c5b3891de176411b89338412bae7, NAME => 'unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:27,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:27,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,240 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,242 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:27,242 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:27,242 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6777c5b3891de176411b89338412bae7 columnFamilyName ut 2023-07-12 20:18:27,243 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(310): Store=6777c5b3891de176411b89338412bae7/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:27,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:27,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6777c5b3891de176411b89338412bae7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10881828000, jitterRate=0.013449206948280334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:27,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:27,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7., pid=125, masterSystemTime=1689193107233 2023-07-12 20:18:27,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
2023-07-12 20:18:27,252 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:27,252 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107252"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193107252"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193107252"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193107252"}]},"ts":"1689193107252"} 2023-07-12 20:18:27,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-12 20:18:27,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424 in 171 msec 2023-07-12 20:18:27,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 20:18:27,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, ASSIGN in 329 msec 2023-07-12 20:18:27,257 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:27,257 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193107257"}]},"ts":"1689193107257"} 2023-07-12 20:18:27,258 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 20:18:27,261 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:27,262 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 392 msec 2023-07-12 20:18:27,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-12 20:18:27,477 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-12 20:18:27,477 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 20:18:27,477 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:27,481 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-12 20:18:27,481 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:27,481 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
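Before the unmovedTable creation traced above, the log records "add rsgroup normal" and "move servers [jenkins-hbase4.apache.org:43429] to rsgroup normal". A hedged sketch of calls of roughly that shape, again assuming the branch-2 RSGroupAdminClient API; the host and port are copied from the log entries:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

class GroupSetupSketch {
  static void setUpNormalGroup(Connection connection) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
    // Matches the "add rsgroup normal" / "Writing ZK GroupInfo count" entries.
    rsGroupAdmin.addRSGroup("normal");
    // Matches "move servers [...] to rsgroup normal" and
    // "Move servers done: default => normal".
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43429)),
        "normal");
  }
}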
2023-07-12 20:18:27,483 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-12 20:18:27,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 20:18:27,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:27,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:27,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:27,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:27,488 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 20:18:27,489 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 6777c5b3891de176411b89338412bae7 to RSGroup normal 2023-07-12 20:18:27,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE 2023-07-12 20:18:27,489 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 20:18:27,489 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE 2023-07-12 20:18:27,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:27,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193107490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193107490"}]},"ts":"1689193107490"} 2023-07-12 20:18:27,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:27,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6777c5b3891de176411b89338412bae7, disabling compactions & flushes 2023-07-12 20:18:27,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
2023-07-12 20:18:27,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. after waiting 0 ms 2023-07-12 20:18:27,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:27,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:27,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6777c5b3891de176411b89338412bae7 move to jenkins-hbase4.apache.org,43429,1689193089109 record at close sequenceid=2 2023-07-12 20:18:27,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,654 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=CLOSED 2023-07-12 20:18:27,654 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107654"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193107654"}]},"ts":"1689193107654"} 2023-07-12 20:18:27,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 20:18:27,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424 in 164 msec 2023-07-12 20:18:27,660 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:27,810 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:27,811 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193107810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193107810"}]},"ts":"1689193107810"} 2023-07-12 20:18:27,812 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:27,976 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6777c5b3891de176411b89338412bae7, NAME => 'unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:27,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,978 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,979 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:27,979 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:27,979 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
6777c5b3891de176411b89338412bae7 columnFamilyName ut 2023-07-12 20:18:27,980 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(310): Store=6777c5b3891de176411b89338412bae7/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:27,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:27,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6777c5b3891de176411b89338412bae7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11359037120, jitterRate=0.05789276957511902}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:27,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:27,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7., pid=128, masterSystemTime=1689193107969 2023-07-12 20:18:27,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:27,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
2023-07-12 20:18:27,988 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:27,989 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193107988"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193107988"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193107988"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193107988"}]},"ts":"1689193107988"} 2023-07-12 20:18:27,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 20:18:27,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,43429,1689193089109 in 178 msec 2023-07-12 20:18:27,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE in 503 msec 2023-07-12 20:18:28,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 20:18:28,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-12 20:18:28,490 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:28,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:28,494 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:28,496 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:28,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 20:18:28,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:28,498 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-12 20:18:28,498 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:28,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 20:18:28,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:28,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-12 20:18:28,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:28,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:28,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:28,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:28,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 20:18:28,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 20:18:28,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:28,512 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-12 20:18:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:28,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-12 20:18:28,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 20:18:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:28,523 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:28,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:28,525 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-12 20:18:28,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:28,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:28,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:28,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:28,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:28,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 20:18:28,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 6777c5b3891de176411b89338412bae7 to RSGroup default 2023-07-12 20:18:28,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE 2023-07-12 20:18:28,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 20:18:28,539 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE 2023-07-12 20:18:28,539 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:28,540 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193108539"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193108539"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193108539"}]},"ts":"1689193108539"} 2023-07-12 20:18:28,541 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:28,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
6777c5b3891de176411b89338412bae7 2023-07-12 20:18:28,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6777c5b3891de176411b89338412bae7, disabling compactions & flushes 2023-07-12 20:18:28,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:28,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:28,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. after waiting 0 ms 2023-07-12 20:18:28,696 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:28,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:28,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:28,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:28,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6777c5b3891de176411b89338412bae7 move to jenkins-hbase4.apache.org,46283,1689193085424 record at close sequenceid=5 2023-07-12 20:18:28,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:28,702 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=CLOSED 2023-07-12 20:18:28,702 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193108702"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193108702"}]},"ts":"1689193108702"} 2023-07-12 20:18:28,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-12 20:18:28,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,43429,1689193089109 in 162 msec 2023-07-12 20:18:28,705 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:28,855 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:28,855 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193108855"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193108855"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193108855"}]},"ts":"1689193108855"} 2023-07-12 20:18:28,857 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:28,895 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 20:18:29,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:29,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6777c5b3891de176411b89338412bae7, NAME => 'unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:29,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:29,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,015 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,018 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:29,018 DEBUG [StoreOpener-6777c5b3891de176411b89338412bae7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/ut 2023-07-12 20:18:29,019 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6777c5b3891de176411b89338412bae7 columnFamilyName ut 2023-07-12 20:18:29,019 INFO [StoreOpener-6777c5b3891de176411b89338412bae7-1] regionserver.HStore(310): Store=6777c5b3891de176411b89338412bae7/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:29,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:29,026 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6777c5b3891de176411b89338412bae7; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11802857920, jitterRate=0.09922680258750916}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:29,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:29,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7., pid=131, masterSystemTime=1689193109009 2023-07-12 20:18:29,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:29,029 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
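For context: the master service requests logged in this section (RSGroupAdminService.MoveTables, RenameRSGroup, MoveServers, RemoveRSGroup) correspond to rsgroup admin client calls issued by the test body and its teardown. The sketch below is a hypothetical, minimal client-side equivalent, assuming the branch-2.4 org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient API whose moveServers method appears in the stack trace further down in this log; the constructor and the exact method signatures (moveTables, renameRSGroup, moveServers, removeRSGroup) are assumptions rather than verified against this branch, and the table, group, host and port values are illustrative only.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumed single-argument constructor, as used by the rsgroup tests.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Moving a table to a group makes the master reopen each of its regions on a
      // server of the target group; that is the CLOSE/OPEN procedure pair logged above.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");

      // Renaming a group only rewrites the group metadata (the /hbase/rsgroup znodes
      // updated in the log); no regions move.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

      // Teardown-style cleanup: return servers and tables to the default group,
      // then drop the now-empty group.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43429)),
          "default");
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("testRename")), "default");
      rsGroupAdmin.removeRSGroup("newgroup");
    }
  }
}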
2023-07-12 20:18:29,030 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=6777c5b3891de176411b89338412bae7, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:29,030 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689193109030"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193109030"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193109030"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193109030"}]},"ts":"1689193109030"} 2023-07-12 20:18:29,033 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-12 20:18:29,034 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 6777c5b3891de176411b89338412bae7, server=jenkins-hbase4.apache.org,46283,1689193085424 in 174 msec 2023-07-12 20:18:29,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=6777c5b3891de176411b89338412bae7, REOPEN/MOVE in 496 msec 2023-07-12 20:18:29,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-12 20:18:29,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-12 20:18:29,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:29,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43429] to rsgroup default 2023-07-12 20:18:29,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 20:18:29,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:29,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:29,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:29,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:29,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 20:18:29,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43429,1689193089109] are moved back to normal 2023-07-12 20:18:29,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 20:18:29,549 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:29,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-12 20:18:29,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:29,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:29,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:29,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 20:18:29,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:29,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:29,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:29,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:29,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:29,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:29,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:29,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:29,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:29,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:29,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:29,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-12 20:18:29,566 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:29,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:29,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:29,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 20:18:29,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(345): Moving region 5590a98ce5dcc4d33b0fc067112783c0 to RSGroup default 2023-07-12 20:18:29,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE 2023-07-12 20:18:29,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 20:18:29,576 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE 2023-07-12 20:18:29,576 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:29,576 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193109576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193109576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193109576"}]},"ts":"1689193109576"} 2023-07-12 20:18:29,578 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,41567,1689193085044}] 2023-07-12 20:18:29,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:29,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5590a98ce5dcc4d33b0fc067112783c0, disabling compactions & flushes 2023-07-12 20:18:29,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:29,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:29,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
after waiting 0 ms 2023-07-12 20:18:29,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:29,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 20:18:29,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:29,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:29,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5590a98ce5dcc4d33b0fc067112783c0 move to jenkins-hbase4.apache.org,43429,1689193089109 record at close sequenceid=5 2023-07-12 20:18:29,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:29,739 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=CLOSED 2023-07-12 20:18:29,739 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193109739"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193109739"}]},"ts":"1689193109739"} 2023-07-12 20:18:29,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-12 20:18:29,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,41567,1689193085044 in 162 msec 2023-07-12 20:18:29,742 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:29,893 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 20:18:29,893 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:29,893 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193109893"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193109893"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193109893"}]},"ts":"1689193109893"} 2023-07-12 20:18:29,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:30,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:30,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5590a98ce5dcc4d33b0fc067112783c0, NAME => 'testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:30,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:30,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,053 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,054 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:30,054 DEBUG [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/tr 2023-07-12 20:18:30,055 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5590a98ce5dcc4d33b0fc067112783c0 columnFamilyName tr 2023-07-12 20:18:30,056 INFO [StoreOpener-5590a98ce5dcc4d33b0fc067112783c0-1] regionserver.HStore(310): Store=5590a98ce5dcc4d33b0fc067112783c0/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:30,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:30,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5590a98ce5dcc4d33b0fc067112783c0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10777310400, jitterRate=0.0037152469158172607}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:30,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:30,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0., pid=134, masterSystemTime=1689193110046 2023-07-12 20:18:30,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:30,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
2023-07-12 20:18:30,065 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=5590a98ce5dcc4d33b0fc067112783c0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:30,065 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689193110065"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193110065"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193110065"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193110065"}]},"ts":"1689193110065"} 2023-07-12 20:18:30,068 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-12 20:18:30,068 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 5590a98ce5dcc4d33b0fc067112783c0, server=jenkins-hbase4.apache.org,43429,1689193089109 in 171 msec 2023-07-12 20:18:30,069 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5590a98ce5dcc4d33b0fc067112783c0, REOPEN/MOVE in 493 msec 2023-07-12 20:18:30,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-12 20:18:30,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-12 20:18:30,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:30,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:30,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 20:18:30,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:30,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 20:18:30,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to newgroup 2023-07-12 20:18:30,581 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 20:18:30,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:30,582 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-12 20:18:30,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:30,588 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:30,591 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:30,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:30,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:30,606 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:30,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,609 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:30,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194310610, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:30,611 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:30,613 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:30,613 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,613 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,613 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:30,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:30,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,634 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=499 (was 502), OpenFileDescriptor=752 (was 763), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=581 (was 605), ProcessCount=173 (was 173), AvailableMemoryMB=4167 (was 4282) 2023-07-12 20:18:30,653 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=499, OpenFileDescriptor=752, MaxFileDescriptor=60000, SystemLoadAverage=581, ProcessCount=173, AvailableMemoryMB=4171 2023-07-12 20:18:30,653 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-12 20:18:30,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:30,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:30,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:30,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:30,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:30,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:30,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:30,673 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:30,676 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:30,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:30,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:30,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:30,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,693 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:30,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194310693, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:30,694 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:30,696 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:30,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,697 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:30,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:30,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 20:18:30,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:30,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 20:18:30,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 20:18:30,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-12 20:18:30,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-12 20:18:30,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:46566 deadline: 1689194310709, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 20:18:30,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-12 20:18:30,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:46566 deadline: 1689194310711, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 20:18:30,716 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 20:18:30,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-12 20:18:30,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-12 20:18:30,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:46566 deadline: 1689194310722, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 20:18:30,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:30,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:30,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:30,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:30,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:30,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:30,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:30,735 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:30,738 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:30,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:30,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:30,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:30,747 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,747 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:30,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194310749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:30,752 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:30,754 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:30,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,755 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,755 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:30,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:30,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,778 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=503 (was 499) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x292363c-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=752 (was 752), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=558 (was 581), ProcessCount=173 (was 173), AvailableMemoryMB=4178 (was 4171) - AvailableMemoryMB LEAK? - 2023-07-12 20:18:30,778 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 20:18:30,797 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=503, OpenFileDescriptor=752, MaxFileDescriptor=60000, SystemLoadAverage=558, ProcessCount=173, AvailableMemoryMB=4178 2023-07-12 20:18:30,797 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 20:18:30,797 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 20:18:30,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,805 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:30,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:30,805 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:30,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:30,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:30,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:30,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:30,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:30,815 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:30,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:30,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:30,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:30,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:30,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:30,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194310827, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:30,828 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:30,830 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:30,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,831 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:30,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:30,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 
20:18:30,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:30,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:30,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,853 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:30,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 20:18:30,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to default 2023-07-12 20:18:30,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:30,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:30,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:30,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,875 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:30,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:30,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:30,881 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:30,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-12 20:18:30,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 20:18:30,884 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:30,885 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:30,885 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:30,886 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:30,892 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:30,897 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:30,897 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:30,897 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:30,897 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:30,897 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f empty. 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e empty. 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 empty. 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 empty. 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:30,898 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 empty. 2023-07-12 20:18:30,899 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:30,899 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:30,899 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:30,899 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:30,899 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 20:18:30,946 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:30,947 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 82ce5906f32202c904911581de608a8f, NAME => 'Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:30,948 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f26b05a722607b99fd58a04e0b0310d9, NAME => 'Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:30,949 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 6bb5331df9ba6c48590d8c388b7b18f5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:30,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 20:18:31,009 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,009 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 82ce5906f32202c904911581de608a8f, disabling compactions & flushes 2023-07-12 20:18:31,010 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. after waiting 0 ms 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 
2023-07-12 20:18:31,010 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 82ce5906f32202c904911581de608a8f: 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,010 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 6bb5331df9ba6c48590d8c388b7b18f5, disabling compactions & flushes 2023-07-12 20:18:31,010 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc3b1a28541b01b2b9ef8e5b6f0aec1e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:31,010 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. after waiting 0 ms 2023-07-12 20:18:31,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,011 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 
2023-07-12 20:18:31,011 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 6bb5331df9ba6c48590d8c388b7b18f5: 2023-07-12 20:18:31,011 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => cfb1db1a1d15d8a5d04ca8c8fa722e61, NAME => 'Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f26b05a722607b99fd58a04e0b0310d9, disabling compactions & flushes 2023-07-12 20:18:31,013 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. after waiting 0 ms 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,013 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,013 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f26b05a722607b99fd58a04e0b0310d9: 2023-07-12 20:18:31,021 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing cc3b1a28541b01b2b9ef8e5b6f0aec1e, disabling compactions & flushes 2023-07-12 20:18:31,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 
2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. after waiting 0 ms 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for cc3b1a28541b01b2b9ef8e5b6f0aec1e: 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing cfb1db1a1d15d8a5d04ca8c8fa722e61, disabling compactions & flushes 2023-07-12 20:18:31,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. after waiting 0 ms 2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 
2023-07-12 20:18:31,022 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for cfb1db1a1d15d8a5d04ca8c8fa722e61: 2023-07-12 20:18:31,025 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:31,026 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111026"}]},"ts":"1689193111026"} 2023-07-12 20:18:31,026 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111026"}]},"ts":"1689193111026"} 2023-07-12 20:18:31,026 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111026"}]},"ts":"1689193111026"} 2023-07-12 20:18:31,026 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111026"}]},"ts":"1689193111026"} 2023-07-12 20:18:31,026 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111026"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111026"}]},"ts":"1689193111026"} 2023-07-12 20:18:31,028 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 20:18:31,029 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:31,029 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193111029"}]},"ts":"1689193111029"} 2023-07-12 20:18:31,032 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 20:18:31,036 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:31,036 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:31,036 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:31,036 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:31,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, ASSIGN}] 2023-07-12 20:18:31,039 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, ASSIGN 2023-07-12 20:18:31,039 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, ASSIGN 2023-07-12 20:18:31,039 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, ASSIGN 2023-07-12 20:18:31,040 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, ASSIGN 2023-07-12 20:18:31,040 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, ASSIGN 2023-07-12 20:18:31,040 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:31,040 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43429,1689193089109; forceNewPlan=false, retain=false 2023-07-12 20:18:31,040 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:31,040 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:31,042 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46283,1689193085424; forceNewPlan=false, retain=false 2023-07-12 20:18:31,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 20:18:31,191 INFO [jenkins-hbase4:42533] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 20:18:31,195 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=cfb1db1a1d15d8a5d04ca8c8fa722e61, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,195 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=6bb5331df9ba6c48590d8c388b7b18f5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,195 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=82ce5906f32202c904911581de608a8f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,195 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=f26b05a722607b99fd58a04e0b0310d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,195 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111195"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111195"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111195"}]},"ts":"1689193111195"} 2023-07-12 20:18:31,196 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111195"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111195"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111195"}]},"ts":"1689193111195"} 2023-07-12 20:18:31,195 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=cc3b1a28541b01b2b9ef8e5b6f0aec1e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,195 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111195"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111195"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111195"}]},"ts":"1689193111195"} 2023-07-12 20:18:31,196 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111195"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111195"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111195"}]},"ts":"1689193111195"} 2023-07-12 20:18:31,195 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111195"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111195"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111195"}]},"ts":"1689193111195"} 2023-07-12 20:18:31,197 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure 82ce5906f32202c904911581de608a8f, 
server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,197 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure f26b05a722607b99fd58a04e0b0310d9, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,198 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; OpenRegionProcedure 6bb5331df9ba6c48590d8c388b7b18f5, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:31,200 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure cc3b1a28541b01b2b9ef8e5b6f0aec1e, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:31,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; OpenRegionProcedure cfb1db1a1d15d8a5d04ca8c8fa722e61, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f26b05a722607b99fd58a04e0b0310d9, NAME => 'Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 20:18:31,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,355 INFO [StoreOpener-f26b05a722607b99fd58a04e0b0310d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,356 DEBUG [StoreOpener-f26b05a722607b99fd58a04e0b0310d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/f 2023-07-12 20:18:31,356 DEBUG [StoreOpener-f26b05a722607b99fd58a04e0b0310d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/f 2023-07-12 20:18:31,357 INFO [StoreOpener-f26b05a722607b99fd58a04e0b0310d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f26b05a722607b99fd58a04e0b0310d9 columnFamilyName f 2023-07-12 20:18:31,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bb5331df9ba6c48590d8c388b7b18f5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 20:18:31,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,357 INFO [StoreOpener-f26b05a722607b99fd58a04e0b0310d9-1] regionserver.HStore(310): Store=f26b05a722607b99fd58a04e0b0310d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:31,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,359 INFO [StoreOpener-6bb5331df9ba6c48590d8c388b7b18f5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,360 DEBUG [StoreOpener-6bb5331df9ba6c48590d8c388b7b18f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/f 2023-07-12 20:18:31,360 DEBUG [StoreOpener-6bb5331df9ba6c48590d8c388b7b18f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/f 2023-07-12 20:18:31,360 INFO [StoreOpener-6bb5331df9ba6c48590d8c388b7b18f5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bb5331df9ba6c48590d8c388b7b18f5 columnFamilyName f 2023-07-12 20:18:31,361 INFO [StoreOpener-6bb5331df9ba6c48590d8c388b7b18f5-1] regionserver.HStore(310): Store=6bb5331df9ba6c48590d8c388b7b18f5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:31,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:31,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f26b05a722607b99fd58a04e0b0310d9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10241904640, jitterRate=-0.04614830017089844}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:31,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f26b05a722607b99fd58a04e0b0310d9: 2023-07-12 20:18:31,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9., pid=142, masterSystemTime=1689193111349 2023-07-12 20:18:31,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): 
writing seq id for 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 82ce5906f32202c904911581de608a8f, NAME => 'Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 20:18:31,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,367 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=f26b05a722607b99fd58a04e0b0310d9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,367 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111367"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193111367"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193111367"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193111367"}]},"ts":"1689193111367"} 2023-07-12 20:18:31,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:31,368 INFO [StoreOpener-82ce5906f32202c904911581de608a8f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bb5331df9ba6c48590d8c388b7b18f5; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9801375680, jitterRate=-0.08717575669288635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:31,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bb5331df9ba6c48590d8c388b7b18f5: 2023-07-12 20:18:31,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5., pid=143, masterSystemTime=1689193111354 2023-07-12 20:18:31,370 DEBUG [StoreOpener-82ce5906f32202c904911581de608a8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/f 2023-07-12 20:18:31,370 DEBUG [StoreOpener-82ce5906f32202c904911581de608a8f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/f 2023-07-12 20:18:31,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 
2023-07-12 20:18:31,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc3b1a28541b01b2b9ef8e5b6f0aec1e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 20:18:31,370 INFO [StoreOpener-82ce5906f32202c904911581de608a8f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 82ce5906f32202c904911581de608a8f columnFamilyName f 2023-07-12 20:18:31,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,370 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=6bb5331df9ba6c48590d8c388b7b18f5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-12 20:18:31,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,371 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; OpenRegionProcedure f26b05a722607b99fd58a04e0b0310d9, server=jenkins-hbase4.apache.org,46283,1689193085424 in 172 msec 2023-07-12 20:18:31,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,371 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111370"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193111370"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193111370"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193111370"}]},"ts":"1689193111370"} 2023-07-12 20:18:31,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,371 INFO [StoreOpener-82ce5906f32202c904911581de608a8f-1] regionserver.HStore(310): Store=82ce5906f32202c904911581de608a8f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:31,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,372 INFO [StoreOpener-cc3b1a28541b01b2b9ef8e5b6f0aec1e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, ASSIGN in 334 msec 2023-07-12 20:18:31,374 DEBUG [StoreOpener-cc3b1a28541b01b2b9ef8e5b6f0aec1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/f 2023-07-12 20:18:31,374 DEBUG [StoreOpener-cc3b1a28541b01b2b9ef8e5b6f0aec1e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/f 2023-07-12 20:18:31,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138 2023-07-12 20:18:31,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; OpenRegionProcedure 6bb5331df9ba6c48590d8c388b7b18f5, server=jenkins-hbase4.apache.org,43429,1689193089109 in 174 msec 2023-07-12 20:18:31,374 INFO [StoreOpener-cc3b1a28541b01b2b9ef8e5b6f0aec1e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc3b1a28541b01b2b9ef8e5b6f0aec1e columnFamilyName f 2023-07-12 20:18:31,375 INFO [StoreOpener-cc3b1a28541b01b2b9ef8e5b6f0aec1e-1] regionserver.HStore(310): Store=cc3b1a28541b01b2b9ef8e5b6f0aec1e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:31,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, ASSIGN in 337 msec 2023-07-12 20:18:31,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:31,381 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc3b1a28541b01b2b9ef8e5b6f0aec1e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11291248000, jitterRate=0.051579415798187256}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:31,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc3b1a28541b01b2b9ef8e5b6f0aec1e: 2023-07-12 20:18:31,381 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e., pid=144, masterSystemTime=1689193111354 2023-07-12 20:18:31,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 
2023-07-12 20:18:31,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:31,384 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=cc3b1a28541b01b2b9ef8e5b6f0aec1e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 82ce5906f32202c904911581de608a8f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10977507840, jitterRate=0.02236008644104004}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:31,384 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111384"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193111384"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193111384"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193111384"}]},"ts":"1689193111384"} 2023-07-12 20:18:31,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 82ce5906f32202c904911581de608a8f: 2023-07-12 20:18:31,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f., pid=141, masterSystemTime=1689193111349 2023-07-12 20:18:31,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 
2023-07-12 20:18:31,386 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=82ce5906f32202c904911581de608a8f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cfb1db1a1d15d8a5d04ca8c8fa722e61, NAME => 'Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 20:18:31,386 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111386"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193111386"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193111386"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193111386"}]},"ts":"1689193111386"} 2023-07-12 20:18:31,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:31,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-12 20:18:31,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure cc3b1a28541b01b2b9ef8e5b6f0aec1e, server=jenkins-hbase4.apache.org,43429,1689193089109 in 185 msec 2023-07-12 20:18:31,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, ASSIGN in 350 msec 2023-07-12 20:18:31,388 INFO [StoreOpener-cfb1db1a1d15d8a5d04ca8c8fa722e61-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-12 20:18:31,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure 82ce5906f32202c904911581de608a8f, server=jenkins-hbase4.apache.org,46283,1689193085424 in 190 msec 2023-07-12 20:18:31,389 DEBUG [StoreOpener-cfb1db1a1d15d8a5d04ca8c8fa722e61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/f 2023-07-12 20:18:31,389 DEBUG [StoreOpener-cfb1db1a1d15d8a5d04ca8c8fa722e61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/f 2023-07-12 20:18:31,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, ASSIGN in 352 msec 2023-07-12 20:18:31,390 INFO [StoreOpener-cfb1db1a1d15d8a5d04ca8c8fa722e61-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cfb1db1a1d15d8a5d04ca8c8fa722e61 columnFamilyName f 2023-07-12 20:18:31,390 INFO [StoreOpener-cfb1db1a1d15d8a5d04ca8c8fa722e61-1] regionserver.HStore(310): Store=cfb1db1a1d15d8a5d04ca8c8fa722e61/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:31,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:31,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cfb1db1a1d15d8a5d04ca8c8fa722e61; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11745290240, jitterRate=0.09386539459228516}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:31,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cfb1db1a1d15d8a5d04ca8c8fa722e61: 2023-07-12 20:18:31,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy 
tasks for Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61., pid=145, masterSystemTime=1689193111349 2023-07-12 20:18:31,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,397 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=cfb1db1a1d15d8a5d04ca8c8fa722e61, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,398 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111397"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193111397"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193111397"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193111397"}]},"ts":"1689193111397"} 2023-07-12 20:18:31,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-12 20:18:31,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; OpenRegionProcedure cfb1db1a1d15d8a5d04ca8c8fa722e61, server=jenkins-hbase4.apache.org,46283,1689193085424 in 198 msec 2023-07-12 20:18:31,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-12 20:18:31,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, ASSIGN in 363 msec 2023-07-12 20:18:31,401 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:31,402 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193111402"}]},"ts":"1689193111402"} 2023-07-12 20:18:31,403 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 20:18:31,405 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:31,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 528 msec 2023-07-12 20:18:31,435 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-12 20:18:31,436 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-12 20:18:31,487 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-12 20:18:31,487 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-12 20:18:31,488 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 20:18:31,488 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:31,492 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-12 20:18:31,492 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:31,492 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 20:18:31,493 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:31,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 20:18:31,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:31,500 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 20:18:31,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 20:18:31,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 20:18:31,504 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193111504"}]},"ts":"1689193111504"} 2023-07-12 20:18:31,505 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 20:18:31,506 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 20:18:31,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, UNASSIGN}] 2023-07-12 20:18:31,511 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, UNASSIGN 2023-07-12 20:18:31,511 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, UNASSIGN 2023-07-12 20:18:31,511 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, UNASSIGN 2023-07-12 20:18:31,511 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, UNASSIGN 2023-07-12 20:18:31,511 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, UNASSIGN 2023-07-12 20:18:31,511 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=f26b05a722607b99fd58a04e0b0310d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,512 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111511"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111511"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111511"}]},"ts":"1689193111511"} 2023-07-12 20:18:31,512 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=82ce5906f32202c904911581de608a8f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,512 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=cfb1db1a1d15d8a5d04ca8c8fa722e61, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:31,512 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111512"}]},"ts":"1689193111512"} 2023-07-12 20:18:31,512 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111512"}]},"ts":"1689193111512"} 2023-07-12 20:18:31,512 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=6bb5331df9ba6c48590d8c388b7b18f5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,512 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111512"}]},"ts":"1689193111512"} 2023-07-12 20:18:31,512 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=cc3b1a28541b01b2b9ef8e5b6f0aec1e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:31,512 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193111512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193111512"}]},"ts":"1689193111512"} 2023-07-12 20:18:31,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure f26b05a722607b99fd58a04e0b0310d9, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure 82ce5906f32202c904911581de608a8f, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,514 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=151, state=RUNNABLE; CloseRegionProcedure cfb1db1a1d15d8a5d04ca8c8fa722e61, server=jenkins-hbase4.apache.org,46283,1689193085424}] 2023-07-12 20:18:31,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=149, state=RUNNABLE; CloseRegionProcedure 6bb5331df9ba6c48590d8c388b7b18f5, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:31,515 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=150, state=RUNNABLE; CloseRegionProcedure cc3b1a28541b01b2b9ef8e5b6f0aec1e, server=jenkins-hbase4.apache.org,43429,1689193089109}] 2023-07-12 20:18:31,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 20:18:31,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f26b05a722607b99fd58a04e0b0310d9, disabling compactions & flushes 2023-07-12 20:18:31,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. after waiting 0 ms 2023-07-12 20:18:31,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc3b1a28541b01b2b9ef8e5b6f0aec1e, disabling compactions & flushes 2023-07-12 20:18:31,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. after waiting 0 ms 2023-07-12 20:18:31,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 2023-07-12 20:18:31,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:31,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:31,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9. 2023-07-12 20:18:31,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f26b05a722607b99fd58a04e0b0310d9: 2023-07-12 20:18:31,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e. 
2023-07-12 20:18:31,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc3b1a28541b01b2b9ef8e5b6f0aec1e: 2023-07-12 20:18:31,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 82ce5906f32202c904911581de608a8f, disabling compactions & flushes 2023-07-12 20:18:31,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. after waiting 0 ms 2023-07-12 20:18:31,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,675 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=f26b05a722607b99fd58a04e0b0310d9, regionState=CLOSED 2023-07-12 20:18:31,675 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111675"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111675"}]},"ts":"1689193111675"} 2023-07-12 20:18:31,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bb5331df9ba6c48590d8c388b7b18f5, disabling compactions & flushes 2023-07-12 20:18:31,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. after waiting 0 ms 2023-07-12 20:18:31,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 
2023-07-12 20:18:31,677 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=cc3b1a28541b01b2b9ef8e5b6f0aec1e, regionState=CLOSED 2023-07-12 20:18:31,677 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111677"}]},"ts":"1689193111677"} 2023-07-12 20:18:31,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:31,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f. 2023-07-12 20:18:31,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 82ce5906f32202c904911581de608a8f: 2023-07-12 20:18:31,681 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-12 20:18:31,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:31,681 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure f26b05a722607b99fd58a04e0b0310d9, server=jenkins-hbase4.apache.org,46283,1689193085424 in 165 msec 2023-07-12 20:18:31,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=150 2023-07-12 20:18:31,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5. 2023-07-12 20:18:31,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bb5331df9ba6c48590d8c388b7b18f5: 2023-07-12 20:18:31,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=150, state=SUCCESS; CloseRegionProcedure cc3b1a28541b01b2b9ef8e5b6f0aec1e, server=jenkins-hbase4.apache.org,43429,1689193089109 in 164 msec 2023-07-12 20:18:31,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cfb1db1a1d15d8a5d04ca8c8fa722e61, disabling compactions & flushes 2023-07-12 20:18:31,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 
2023-07-12 20:18:31,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. after waiting 0 ms 2023-07-12 20:18:31,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 2023-07-12 20:18:31,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f26b05a722607b99fd58a04e0b0310d9, UNASSIGN in 174 msec 2023-07-12 20:18:31,684 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=82ce5906f32202c904911581de608a8f, regionState=CLOSED 2023-07-12 20:18:31,684 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111684"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111684"}]},"ts":"1689193111684"} 2023-07-12 20:18:31,684 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3b1a28541b01b2b9ef8e5b6f0aec1e, UNASSIGN in 175 msec 2023-07-12 20:18:31,685 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=6bb5331df9ba6c48590d8c388b7b18f5, regionState=CLOSED 2023-07-12 20:18:31,685 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689193111685"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111685"}]},"ts":"1689193111685"} 2023-07-12 20:18:31,688 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-12 20:18:31,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:31,688 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure 82ce5906f32202c904911581de608a8f, server=jenkins-hbase4.apache.org,46283,1689193085424 in 173 msec 2023-07-12 20:18:31,689 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61. 
2023-07-12 20:18:31,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cfb1db1a1d15d8a5d04ca8c8fa722e61: 2023-07-12 20:18:31,689 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=149 2023-07-12 20:18:31,690 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82ce5906f32202c904911581de608a8f, UNASSIGN in 181 msec 2023-07-12 20:18:31,690 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=149, state=SUCCESS; CloseRegionProcedure 6bb5331df9ba6c48590d8c388b7b18f5, server=jenkins-hbase4.apache.org,43429,1689193089109 in 172 msec 2023-07-12 20:18:31,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=6bb5331df9ba6c48590d8c388b7b18f5, UNASSIGN in 183 msec 2023-07-12 20:18:31,691 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=cfb1db1a1d15d8a5d04ca8c8fa722e61, regionState=CLOSED 2023-07-12 20:18:31,691 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689193111691"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193111691"}]},"ts":"1689193111691"} 2023-07-12 20:18:31,693 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=151 2023-07-12 20:18:31,693 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=151, state=SUCCESS; CloseRegionProcedure cfb1db1a1d15d8a5d04ca8c8fa722e61, server=jenkins-hbase4.apache.org,46283,1689193085424 in 178 msec 2023-07-12 20:18:31,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-12 20:18:31,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cfb1db1a1d15d8a5d04ca8c8fa722e61, UNASSIGN in 186 msec 2023-07-12 20:18:31,695 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193111695"}]},"ts":"1689193111695"} 2023-07-12 20:18:31,696 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 20:18:31,701 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 20:18:31,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 202 msec 2023-07-12 20:18:31,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-12 20:18:31,806 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-12 
20:18:31,806 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:31,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:31,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 20:18:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1175230447, current retry=0 2023-07-12 20:18:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1175230447. 2023-07-12 20:18:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:31,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:31,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:31,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 20:18:31,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:31,824 INFO [Listener at localhost/36071] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 20:18:31,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-12 20:18:31,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at 
org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:31,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:46566 deadline: 1689193171826, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 20:18:31,827 DEBUG [Listener at localhost/36071] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-12 20:18:31,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-12 20:18:31,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,834 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1175230447' 2023-07-12 20:18:31,835 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:31,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:31,842 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,842 DEBUG
[HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,842 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,842 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,842 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,845 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/recovered.edits] 2023-07-12 20:18:31,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 20:18:31,847 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/recovered.edits] 2023-07-12 20:18:31,853 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/recovered.edits] 2023-07-12 20:18:31,853 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/recovered.edits] 2023-07-12 20:18:31,853 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/f, FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/recovered.edits] 2023-07-12 20:18:31,861 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f/recovered.edits/4.seqid 2023-07-12 20:18:31,863 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/82ce5906f32202c904911581de608a8f 2023-07-12 20:18:31,864 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e/recovered.edits/4.seqid 2023-07-12 20:18:31,864 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5/recovered.edits/4.seqid 2023-07-12 20:18:31,865 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/6bb5331df9ba6c48590d8c388b7b18f5 2023-07-12 20:18:31,866 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61/recovered.edits/4.seqid 2023-07-12 20:18:31,866 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cc3b1a28541b01b2b9ef8e5b6f0aec1e 2023-07-12 20:18:31,866 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/cfb1db1a1d15d8a5d04ca8c8fa722e61 2023-07-12 20:18:31,867 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/recovered.edits/4.seqid to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/archive/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9/recovered.edits/4.seqid 2023-07-12 20:18:31,868 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/.tmp/data/default/Group_testDisabledTableMove/f26b05a722607b99fd58a04e0b0310d9 2023-07-12 20:18:31,868 
DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 20:18:31,871 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,873 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 20:18:31,877 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-12 20:18:31,878 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,878 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-12 20:18:31,878 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193111878"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,879 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193111878"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,879 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193111878"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,879 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193111878"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,879 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193111878"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,880 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 20:18:31,880 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 82ce5906f32202c904911581de608a8f, NAME => 'Group_testDisabledTableMove,,1689193110877.82ce5906f32202c904911581de608a8f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f26b05a722607b99fd58a04e0b0310d9, NAME => 'Group_testDisabledTableMove,aaaaa,1689193110877.f26b05a722607b99fd58a04e0b0310d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 6bb5331df9ba6c48590d8c388b7b18f5, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689193110877.6bb5331df9ba6c48590d8c388b7b18f5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cc3b1a28541b01b2b9ef8e5b6f0aec1e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689193110877.cc3b1a28541b01b2b9ef8e5b6f0aec1e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => cfb1db1a1d15d8a5d04ca8c8fa722e61, NAME => 
'Group_testDisabledTableMove,zzzzz,1689193110877.cfb1db1a1d15d8a5d04ca8c8fa722e61.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 20:18:31,880 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 2023-07-12 20:18:31,881 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193111881"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:31,882 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 20:18:31,885 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 20:18:31,886 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 54 msec 2023-07-12 20:18:31,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-12 20:18:31,947 INFO [Listener at localhost/36071] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-12 20:18:31,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:31,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:31,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:31,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:31,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:31,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567] to rsgroup default 2023-07-12 20:18:31,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:31,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:31,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1175230447, current retry=0 2023-07-12 20:18:31,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39187,1689193085232, jenkins-hbase4.apache.org,41567,1689193085044] are moved back to Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1175230447 => default 2023-07-12 20:18:31,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:31,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1175230447 2023-07-12 20:18:31,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:31,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:31,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:31,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:31,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:31,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:31,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:31,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:31,964 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:31,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:31,968 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:31,971 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:31,972 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:31,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:31,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:31,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:31,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:31,980 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:31,980 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:31,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:31,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:31,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194311982, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:31,983 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:31,985 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:31,985 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:31,986 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:31,986 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:31,986 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:31,986 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:32,008 INFO [Listener at localhost/36071] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=504 (was 503) Potentially hanging thread: hconnection-0x5fc06702-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1777517396_17 at /127.0.0.1:56944 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5275ffcd-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=777 (was 752) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=558 (was 558), ProcessCount=171 (was 173), AvailableMemoryMB=6251 (was 4178) - AvailableMemoryMB LEAK? - 2023-07-12 20:18:32,008 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 20:18:32,026 INFO [Listener at localhost/36071] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=504, OpenFileDescriptor=777, MaxFileDescriptor=60000, SystemLoadAverage=558, ProcessCount=172, AvailableMemoryMB=6250 2023-07-12 20:18:32,026 WARN [Listener at localhost/36071] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 20:18:32,026 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 20:18:32,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:32,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:32,031 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:32,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:32,031 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:32,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:32,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:32,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:32,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:32,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:32,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:32,042 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:32,043 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:32,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:32,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:32,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:32,049 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:32,051 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:32,051 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:32,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42533] to rsgroup master 2023-07-12 20:18:32,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:32,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:46566 deadline: 1689194312053, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 2023-07-12 20:18:32,053 WARN [Listener at localhost/36071] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:32,055 INFO [Listener at localhost/36071] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:32,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:32,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:32,056 INFO [Listener at localhost/36071] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39187, jenkins-hbase4.apache.org:41567, jenkins-hbase4.apache.org:43429, jenkins-hbase4.apache.org:46283], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:32,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:32,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:32,057 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 20:18:32,057 INFO [Listener at localhost/36071] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 20:18:32,057 DEBUG [Listener at localhost/36071] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59ea51e3 to 127.0.0.1:51228 2023-07-12 20:18:32,058 DEBUG [Listener at localhost/36071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,060 DEBUG [Listener at localhost/36071] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 20:18:32,060 DEBUG [Listener at localhost/36071] util.JVMClusterUtil(257): Found active master hash=1592297455, stopped=false 2023-07-12 20:18:32,060 DEBUG [Listener at localhost/36071] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:32,060 DEBUG [Listener at localhost/36071] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:32,061 INFO [Listener at localhost/36071] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:32,065 INFO [Listener at localhost/36071] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 20:18:32,065 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:32,066 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:32,065 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:32,066 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:32,066 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:32,066 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:32,066 DEBUG [Listener at localhost/36071] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0fb2cca5 to 127.0.0.1:51228 2023-07-12 20:18:32,067 DEBUG [Listener at localhost/36071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,067 INFO [Listener at localhost/36071] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41567,1689193085044' ***** 2023-07-12 20:18:32,067 INFO [Listener at localhost/36071] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
2023-07-12 20:18:32,067 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:32,069 INFO [Listener at localhost/36071] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39187,1689193085232' ***** 2023-07-12 20:18:32,069 INFO [Listener at localhost/36071] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:32,073 INFO [Listener at localhost/36071] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46283,1689193085424' ***** 2023-07-12 20:18:32,073 INFO [Listener at localhost/36071] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:32,073 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:32,073 INFO [Listener at localhost/36071] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43429,1689193089109' ***** 2023-07-12 20:18:32,073 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:32,075 INFO [Listener at localhost/36071] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:32,075 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:32,079 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,082 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:32,082 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:32,082 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,082 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:32,082 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,082 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,086 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:32,091 INFO [RS:0;jenkins-hbase4:41567] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@609673c2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:32,091 INFO [RS:3;jenkins-hbase4:43429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@16958b7c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:32,091 INFO [RS:1;jenkins-hbase4:39187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@10e2164e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:32,091 INFO [RS:2;jenkins-hbase4:46283] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@42be2682{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:32,096 INFO [RS:2;jenkins-hbase4:46283] server.AbstractConnector(383): Stopped ServerConnector@68f85add{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,096 INFO [RS:1;jenkins-hbase4:39187] server.AbstractConnector(383): Stopped ServerConnector@41fa53df{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,096 INFO [RS:3;jenkins-hbase4:43429] server.AbstractConnector(383): Stopped ServerConnector@1d8aa3aa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,096 INFO [RS:0;jenkins-hbase4:41567] server.AbstractConnector(383): Stopped ServerConnector@79947ad{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,096 INFO [RS:3;jenkins-hbase4:43429] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:32,096 INFO [RS:1;jenkins-hbase4:39187] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:32,096 INFO [RS:2;jenkins-hbase4:46283] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:32,097 INFO [RS:3;jenkins-hbase4:43429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68aba549{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:32,098 INFO [RS:1;jenkins-hbase4:39187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6221cb1e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:32,099 INFO [RS:2;jenkins-hbase4:46283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@66807c4c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:32,096 INFO [RS:0;jenkins-hbase4:41567] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:32,099 INFO [RS:1;jenkins-hbase4:39187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e075be0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:32,099 INFO [RS:3;jenkins-hbase4:43429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f357cde{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:32,101 INFO [RS:0;jenkins-hbase4:41567] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3fd26991{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:32,100 INFO [RS:2;jenkins-hbase4:46283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b4c0447{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:32,101 INFO [RS:0;jenkins-hbase4:41567] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3751ed89{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:32,104 INFO [RS:2;jenkins-hbase4:46283] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(3305): Received CLOSE for 455649b011ddbbda985bd47060a43b64 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(3305): Received CLOSE for aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(3305): Received CLOSE for 6777c5b3891de176411b89338412bae7 2023-07-12 20:18:32,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 455649b011ddbbda985bd47060a43b64, disabling compactions & flushes 2023-07-12 20:18:32,105 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:32,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:32,105 DEBUG [RS:2;jenkins-hbase4:46283] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06403b8b to 127.0.0.1:51228 2023-07-12 20:18:32,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:32,105 DEBUG [RS:2;jenkins-hbase4:46283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. after waiting 0 ms 2023-07-12 20:18:32,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:32,106 INFO [RS:2;jenkins-hbase4:46283] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:32,107 INFO [RS:3;jenkins-hbase4:43429] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:32,107 INFO [RS:2;jenkins-hbase4:46283] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:32,107 INFO [RS:3;jenkins-hbase4:43429] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:32,107 INFO [RS:2;jenkins-hbase4:46283] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:32,109 INFO [RS:0;jenkins-hbase4:41567] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:32,109 INFO [RS:0;jenkins-hbase4:41567] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-12 20:18:32,109 INFO [RS:0;jenkins-hbase4:41567] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:32,109 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:32,110 DEBUG [RS:0;jenkins-hbase4:41567] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6bbc0914 to 127.0.0.1:51228 2023-07-12 20:18:32,110 DEBUG [RS:0;jenkins-hbase4:41567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,110 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41567,1689193085044; all regions closed. 2023-07-12 20:18:32,108 INFO [RS:1;jenkins-hbase4:39187] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:32,110 INFO [RS:1;jenkins-hbase4:39187] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:32,110 INFO [RS:1;jenkins-hbase4:39187] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:32,110 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:32,107 INFO [RS:3;jenkins-hbase4:43429] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:32,110 DEBUG [RS:1;jenkins-hbase4:39187] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x407aac69 to 127.0.0.1:51228 2023-07-12 20:18:32,110 DEBUG [RS:1;jenkins-hbase4:39187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,110 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39187,1689193085232; all regions closed. 2023-07-12 20:18:32,109 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 20:18:32,110 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(3305): Received CLOSE for 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:32,111 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:32,111 DEBUG [RS:3;jenkins-hbase4:43429] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5663bd11 to 127.0.0.1:51228 2023-07-12 20:18:32,111 DEBUG [RS:3;jenkins-hbase4:43429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5590a98ce5dcc4d33b0fc067112783c0, disabling compactions & flushes 2023-07-12 20:18:32,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:32,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:32,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. after waiting 0 ms 2023-07-12 20:18:32,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
2023-07-12 20:18:32,115 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 20:18:32,116 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1478): Online Regions={5590a98ce5dcc4d33b0fc067112783c0=testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0.} 2023-07-12 20:18:32,119 DEBUG [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1504): Waiting on 5590a98ce5dcc4d33b0fc067112783c0 2023-07-12 20:18:32,133 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 20:18:32,133 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1478): Online Regions={455649b011ddbbda985bd47060a43b64=hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64., 1588230740=hbase:meta,,1.1588230740, aa1db639fdc668f9efd7f5e68d620495=hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495., 6777c5b3891de176411b89338412bae7=unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7.} 2023-07-12 20:18:32,133 DEBUG [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1504): Waiting on 1588230740, 455649b011ddbbda985bd47060a43b64, 6777c5b3891de176411b89338412bae7, aa1db639fdc668f9efd7f5e68d620495 2023-07-12 20:18:32,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/namespace/455649b011ddbbda985bd47060a43b64/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-12 20:18:32,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:32,134 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:32,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:32,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:32,135 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:32,135 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=82.47 KB heapSize=130.27 KB 2023-07-12 20:18:32,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 455649b011ddbbda985bd47060a43b64: 2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689193088045.455649b011ddbbda985bd47060a43b64. 2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa1db639fdc668f9efd7f5e68d620495, disabling compactions & flushes 2023-07-12 20:18:32,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. after waiting 0 ms 2023-07-12 20:18:32,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:32,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing aa1db639fdc668f9efd7f5e68d620495 1/1 column families, dataSize=22.10 KB heapSize=36.55 KB 2023-07-12 20:18:32,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/testRename/5590a98ce5dcc4d33b0fc067112783c0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 20:18:32,149 DEBUG [RS:0;jenkins-hbase4:41567] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs 2023-07-12 20:18:32,149 INFO [RS:0;jenkins-hbase4:41567] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41567%2C1689193085044:(num 1689193087575) 2023-07-12 20:18:32,149 DEBUG [RS:0;jenkins-hbase4:41567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,149 INFO [RS:0;jenkins-hbase4:41567] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,149 INFO [RS:0;jenkins-hbase4:41567] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:32,150 INFO [RS:0;jenkins-hbase4:41567] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:32,150 INFO [RS:0;jenkins-hbase4:41567] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:32,150 INFO [RS:0;jenkins-hbase4:41567] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:32,150 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:32,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 2023-07-12 20:18:32,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5590a98ce5dcc4d33b0fc067112783c0: 2023-07-12 20:18:32,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689193105219.5590a98ce5dcc4d33b0fc067112783c0. 
2023-07-12 20:18:32,152 INFO [RS:0;jenkins-hbase4:41567] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41567 2023-07-12 20:18:32,167 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:32,167 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:32,168 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,168 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,168 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:32,168 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,168 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,167 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41567,1689193085044 2023-07-12 20:18:32,169 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,169 DEBUG [RS:1;jenkins-hbase4:39187] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs 2023-07-12 20:18:32,169 INFO [RS:1;jenkins-hbase4:39187] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39187%2C1689193085232:(num 1689193087571) 2023-07-12 20:18:32,169 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41567,1689193085044] 2023-07-12 20:18:32,169 DEBUG [RS:1;jenkins-hbase4:39187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,170 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41567,1689193085044; numProcessing=1 2023-07-12 20:18:32,170 INFO [RS:1;jenkins-hbase4:39187] regionserver.LeaseManager(133): Closed 
leases 2023-07-12 20:18:32,171 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41567,1689193085044 already deleted, retry=false 2023-07-12 20:18:32,171 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41567,1689193085044 expired; onlineServers=3 2023-07-12 20:18:32,175 INFO [RS:1;jenkins-hbase4:39187] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:32,178 INFO [RS:1;jenkins-hbase4:39187] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:32,179 INFO [RS:1;jenkins-hbase4:39187] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:32,179 INFO [RS:1;jenkins-hbase4:39187] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:32,178 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:32,180 INFO [RS:1;jenkins-hbase4:39187] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39187 2023-07-12 20:18:32,184 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.10 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/b3aa3234ab3149438d21fffdebfe0d17 2023-07-12 20:18:32,185 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,185 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:32,185 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:32,185 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39187,1689193085232 2023-07-12 20:18:32,191 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39187,1689193085232] 2023-07-12 20:18:32,191 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39187,1689193085232; numProcessing=2 2023-07-12 20:18:32,192 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39187,1689193085232 already deleted, retry=false 2023-07-12 20:18:32,192 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39187,1689193085232 expired; 
onlineServers=2 2023-07-12 20:18:32,198 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b3aa3234ab3149438d21fffdebfe0d17 2023-07-12 20:18:32,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/.tmp/m/b3aa3234ab3149438d21fffdebfe0d17 as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/b3aa3234ab3149438d21fffdebfe0d17 2023-07-12 20:18:32,200 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.48 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/info/0c0e78ab8af1470785520af90eb5aeeb 2023-07-12 20:18:32,206 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c0e78ab8af1470785520af90eb5aeeb 2023-07-12 20:18:32,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b3aa3234ab3149438d21fffdebfe0d17 2023-07-12 20:18:32,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/m/b3aa3234ab3149438d21fffdebfe0d17, entries=22, sequenceid=107, filesize=5.9 K 2023-07-12 20:18:32,208 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.10 KB/22631, heapSize ~36.53 KB/37408, currentSize=0 B/0 for aa1db639fdc668f9efd7f5e68d620495 in 72ms, sequenceid=107, compaction requested=true 2023-07-12 20:18:32,208 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 20:18:32,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/rsgroup/aa1db639fdc668f9efd7f5e68d620495/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-12 20:18:32,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:32,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 2023-07-12 20:18:32,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa1db639fdc668f9efd7f5e68d620495: 2023-07-12 20:18:32,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689193088226.aa1db639fdc668f9efd7f5e68d620495. 
2023-07-12 20:18:32,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6777c5b3891de176411b89338412bae7, disabling compactions & flushes 2023-07-12 20:18:32,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:32,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:32,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. after waiting 0 ms 2023-07-12 20:18:32,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:32,227 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/rep_barrier/56d5e61511104a8aa5ca4cb3fab22a91 2023-07-12 20:18:32,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/default/unmovedTable/6777c5b3891de176411b89338412bae7/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 20:18:32,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 2023-07-12 20:18:32,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6777c5b3891de176411b89338412bae7: 2023-07-12 20:18:32,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689193106869.6777c5b3891de176411b89338412bae7. 
2023-07-12 20:18:32,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56d5e61511104a8aa5ca4cb3fab22a91 2023-07-12 20:18:32,248 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=212 (bloomFilter=false), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/table/9f2aa3338e744dd9a0ce8c34926c11d2 2023-07-12 20:18:32,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f2aa3338e744dd9a0ce8c34926c11d2 2023-07-12 20:18:32,255 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/info/0c0e78ab8af1470785520af90eb5aeeb as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/info/0c0e78ab8af1470785520af90eb5aeeb 2023-07-12 20:18:32,262 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c0e78ab8af1470785520af90eb5aeeb 2023-07-12 20:18:32,262 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/info/0c0e78ab8af1470785520af90eb5aeeb, entries=108, sequenceid=212, filesize=17.2 K 2023-07-12 20:18:32,263 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/rep_barrier/56d5e61511104a8aa5ca4cb3fab22a91 as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/rep_barrier/56d5e61511104a8aa5ca4cb3fab22a91 2023-07-12 20:18:32,270 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56d5e61511104a8aa5ca4cb3fab22a91 2023-07-12 20:18:32,270 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/rep_barrier/56d5e61511104a8aa5ca4cb3fab22a91, entries=18, sequenceid=212, filesize=6.9 K 2023-07-12 20:18:32,271 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/.tmp/table/9f2aa3338e744dd9a0ce8c34926c11d2 as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/table/9f2aa3338e744dd9a0ce8c34926c11d2 2023-07-12 20:18:32,277 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9f2aa3338e744dd9a0ce8c34926c11d2 2023-07-12 20:18:32,277 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/table/9f2aa3338e744dd9a0ce8c34926c11d2, entries=31, sequenceid=212, filesize=7.4 K 2023-07-12 20:18:32,278 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~82.47 KB/84449, heapSize ~130.23 KB/133352, currentSize=0 B/0 for 1588230740 in 143ms, sequenceid=212, compaction requested=false 2023-07-12 20:18:32,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/data/hbase/meta/1588230740/recovered.edits/215.seqid, newMaxSeqId=215, maxSeqId=1 2023-07-12 20:18:32,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:32,288 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:32,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:32,288 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:32,319 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43429,1689193089109; all regions closed. 2023-07-12 20:18:32,327 DEBUG [RS:3;jenkins-hbase4:43429] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs 2023-07-12 20:18:32,327 INFO [RS:3;jenkins-hbase4:43429] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43429%2C1689193089109:(num 1689193089558) 2023-07-12 20:18:32,327 DEBUG [RS:3;jenkins-hbase4:43429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,327 INFO [RS:3;jenkins-hbase4:43429] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,328 INFO [RS:3;jenkins-hbase4:43429] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:32,328 INFO [RS:3;jenkins-hbase4:43429] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:32,328 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:32,328 INFO [RS:3;jenkins-hbase4:43429] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:32,328 INFO [RS:3;jenkins-hbase4:43429] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 20:18:32,329 INFO [RS:3;jenkins-hbase4:43429] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43429 2023-07-12 20:18:32,331 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:32,331 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,331 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43429,1689193089109 2023-07-12 20:18:32,333 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46283,1689193085424; all regions closed. 2023-07-12 20:18:32,333 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43429,1689193089109] 2023-07-12 20:18:32,333 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43429,1689193089109; numProcessing=3 2023-07-12 20:18:32,334 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43429,1689193089109 already deleted, retry=false 2023-07-12 20:18:32,334 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43429,1689193089109 expired; onlineServers=1 2023-07-12 20:18:32,345 DEBUG [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs 2023-07-12 20:18:32,345 INFO [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46283%2C1689193085424.meta:.meta(num 1689193087721) 2023-07-12 20:18:32,353 DEBUG [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/oldWALs 2023-07-12 20:18:32,353 INFO [RS:2;jenkins-hbase4:46283] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46283%2C1689193085424:(num 1689193087577) 2023-07-12 20:18:32,353 DEBUG [RS:2;jenkins-hbase4:46283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,353 INFO [RS:2;jenkins-hbase4:46283] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:32,353 INFO [RS:2;jenkins-hbase4:46283] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:32,354 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 20:18:32,355 INFO [RS:2;jenkins-hbase4:46283] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46283 2023-07-12 20:18:32,356 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46283,1689193085424 2023-07-12 20:18:32,356 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:32,358 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46283,1689193085424] 2023-07-12 20:18:32,358 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46283,1689193085424; numProcessing=4 2023-07-12 20:18:32,360 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46283,1689193085424 already deleted, retry=false 2023-07-12 20:18:32,360 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46283,1689193085424 expired; onlineServers=0 2023-07-12 20:18:32,360 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42533,1689193083113' ***** 2023-07-12 20:18:32,360 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 20:18:32,361 DEBUG [M:0;jenkins-hbase4:42533] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fd58dd4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:32,361 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:32,365 INFO [M:0;jenkins-hbase4:42533] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5f9ed0a6{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:32,365 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:32,365 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:32,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:32,365 INFO [M:0;jenkins-hbase4:42533] server.AbstractConnector(383): Stopped ServerConnector@7d3cded5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,365 INFO [M:0;jenkins-hbase4:42533] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:32,366 INFO [M:0;jenkins-hbase4:42533] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4ff95bf2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:32,367 INFO [M:0;jenkins-hbase4:42533] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@48ee05fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:32,367 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42533,1689193083113 2023-07-12 20:18:32,367 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42533,1689193083113; all regions closed. 2023-07-12 20:18:32,367 DEBUG [M:0;jenkins-hbase4:42533] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:32,368 INFO [M:0;jenkins-hbase4:42533] master.HMaster(1491): Stopping master jetty server 2023-07-12 20:18:32,368 INFO [M:0;jenkins-hbase4:42533] server.AbstractConnector(383): Stopped ServerConnector@6e75c809{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:32,369 DEBUG [M:0;jenkins-hbase4:42533] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 20:18:32,369 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 20:18:32,369 DEBUG [M:0;jenkins-hbase4:42533] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 20:18:32,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193087098] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193087098,5,FailOnTimeoutGroup] 2023-07-12 20:18:32,369 INFO [M:0;jenkins-hbase4:42533] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 20:18:32,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193087098] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193087098,5,FailOnTimeoutGroup] 2023-07-12 20:18:32,369 INFO [M:0;jenkins-hbase4:42533] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 20:18:32,370 INFO [M:0;jenkins-hbase4:42533] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 20:18:32,370 DEBUG [M:0;jenkins-hbase4:42533] master.HMaster(1512): Stopping service threads 2023-07-12 20:18:32,370 INFO [M:0;jenkins-hbase4:42533] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 20:18:32,370 ERROR [M:0;jenkins-hbase4:42533] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 20:18:32,371 INFO [M:0;jenkins-hbase4:42533] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 20:18:32,371 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 20:18:32,371 DEBUG [M:0;jenkins-hbase4:42533] zookeeper.ZKUtil(398): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 20:18:32,372 WARN [M:0;jenkins-hbase4:42533] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 20:18:32,372 INFO [M:0;jenkins-hbase4:42533] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 20:18:32,372 INFO [M:0;jenkins-hbase4:42533] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 20:18:32,372 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:32,372 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:32,372 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:32,372 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:32,372 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 20:18:32,372 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=529.13 KB heapSize=633.28 KB 2023-07-12 20:18:32,399 INFO [M:0;jenkins-hbase4:42533] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=529.13 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f39dcfd7bb804d2ea945c794f97a3efb 2023-07-12 20:18:32,407 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f39dcfd7bb804d2ea945c794f97a3efb as hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f39dcfd7bb804d2ea945c794f97a3efb 2023-07-12 20:18:32,414 INFO [M:0;jenkins-hbase4:42533] regionserver.HStore(1080): Added hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f39dcfd7bb804d2ea945c794f97a3efb, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-12 20:18:32,415 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegion(2948): Finished flush of dataSize ~529.13 KB/541824, heapSize ~633.27 KB/648464, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 43ms, sequenceid=1176, compaction requested=false 2023-07-12 20:18:32,417 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:32,417 DEBUG [M:0;jenkins-hbase4:42533] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:32,422 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:32,422 INFO [M:0;jenkins-hbase4:42533] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 20:18:32,423 INFO [M:0;jenkins-hbase4:42533] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42533 2023-07-12 20:18:32,424 DEBUG [M:0;jenkins-hbase4:42533] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42533,1689193083113 already deleted, retry=false 2023-07-12 20:18:32,768 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,768 INFO [M:0;jenkins-hbase4:42533] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42533,1689193083113; zookeeper connection closed. 2023-07-12 20:18:32,768 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): master:42533-0x1015b2f70320000, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,868 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,868 INFO [RS:2;jenkins-hbase4:46283] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46283,1689193085424; zookeeper connection closed. 
2023-07-12 20:18:32,868 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:46283-0x1015b2f70320003, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,869 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@11e1b215] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@11e1b215 2023-07-12 20:18:32,969 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,969 INFO [RS:3;jenkins-hbase4:43429] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43429,1689193089109; zookeeper connection closed. 2023-07-12 20:18:32,969 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:43429-0x1015b2f7032000b, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:32,969 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@12d002f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@12d002f8 2023-07-12 20:18:33,069 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:33,069 INFO [RS:1;jenkins-hbase4:39187] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39187,1689193085232; zookeeper connection closed. 2023-07-12 20:18:33,069 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:39187-0x1015b2f70320002, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:33,069 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6ed0c0d1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6ed0c0d1 2023-07-12 20:18:33,169 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:33,169 INFO [RS:0;jenkins-hbase4:41567] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41567,1689193085044; zookeeper connection closed. 
2023-07-12 20:18:33,169 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): regionserver:41567-0x1015b2f70320001, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:33,170 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@28e5cf7a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@28e5cf7a 2023-07-12 20:18:33,170 INFO [Listener at localhost/36071] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 20:18:33,170 WARN [Listener at localhost/36071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:33,176 INFO [Listener at localhost/36071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:33,281 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:33,281 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-600225254-172.31.14.131-1689193079268 (Datanode Uuid e57aebba-cd55-4500-9ed9-ba03d666544d) service to localhost/127.0.0.1:41485 2023-07-12 20:18:33,283 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data5/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,284 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data6/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,287 WARN [Listener at localhost/36071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:33,294 INFO [Listener at localhost/36071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:33,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:33,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:33,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:33,397 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:33,397 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-600225254-172.31.14.131-1689193079268 (Datanode Uuid 5deb8b17-48b3-4e56-9487-507fe6d85b8d) service to localhost/127.0.0.1:41485 2023-07-12 20:18:33,398 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data3/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,399 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data4/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,400 WARN [Listener at localhost/36071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:33,412 INFO [Listener at localhost/36071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:33,519 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:33,519 WARN [BP-600225254-172.31.14.131-1689193079268 heartbeating to localhost/127.0.0.1:41485] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-600225254-172.31.14.131-1689193079268 (Datanode Uuid bcc5c7f8-f2ab-463d-a9ca-1fbcbb6b1d3f) service to localhost/127.0.0.1:41485 2023-07-12 20:18:33,520 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data1/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,520 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/cluster_599e48d2-0e92-9211-4f46-ef81fbc5f05c/dfs/data/data2/current/BP-600225254-172.31.14.131-1689193079268] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:33,553 INFO [Listener at localhost/36071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:33,681 INFO [Listener at localhost/36071] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.log.dir 
so I do NOT create it in target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5cf3649-e1b1-7bc3-d79c-380b4c3a55fc/hadoop.tmp.dir so I do NOT create it in target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e, deleteOnExit=true 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 20:18:33,752 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/test.cache.data in system properties and HBase conf 2023-07-12 20:18:33,753 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 20:18:33,753 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir in system properties and HBase conf 2023-07-12 20:18:33,753 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 20:18:33,753 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 20:18:33,753 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 20:18:33,753 DEBUG [Listener at localhost/36071] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 20:18:33,754 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:18:33,755 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:18:33,755 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 20:18:33,755 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/nfs.dump.dir in system properties and HBase conf 2023-07-12 20:18:33,755 INFO [Listener at localhost/36071] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir in system properties and HBase conf 2023-07-12 20:18:33,755 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:18:33,756 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 20:18:33,756 INFO [Listener at localhost/36071] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 20:18:33,761 WARN [Listener at localhost/36071] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:18:33,761 WARN [Listener at localhost/36071] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:18:33,778 DEBUG [Listener at localhost/36071-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015b2f7032000a, quorum=127.0.0.1:51228, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 20:18:33,778 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015b2f7032000a, quorum=127.0.0.1:51228, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 20:18:33,821 WARN [Listener at localhost/36071] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 20:18:33,891 WARN [Listener at localhost/36071] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:33,894 INFO [Listener at localhost/36071] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:33,903 INFO [Listener at localhost/36071] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/Jetty_localhost_32797_hdfs____xc8m80/webapp 2023-07-12 20:18:34,028 INFO [Listener at localhost/36071] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32797 2023-07-12 20:18:34,033 WARN [Listener at localhost/36071] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:18:34,034 WARN [Listener at localhost/36071] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:18:34,119 WARN [Listener at localhost/33535] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:34,152 
WARN [Listener at localhost/33535] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:34,158 WARN [Listener at localhost/33535] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:34,159 INFO [Listener at localhost/33535] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:34,168 INFO [Listener at localhost/33535] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/Jetty_localhost_36559_datanode____.kpt4n7/webapp 2023-07-12 20:18:34,307 INFO [Listener at localhost/33535] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36559 2023-07-12 20:18:34,333 WARN [Listener at localhost/42631] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:34,377 WARN [Listener at localhost/42631] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:34,383 WARN [Listener at localhost/42631] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:34,384 INFO [Listener at localhost/42631] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:34,394 INFO [Listener at localhost/42631] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/Jetty_localhost_43477_datanode____.4ydz6s/webapp 2023-07-12 20:18:34,485 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x56da44f2b3990a32: Processing first storage report for DS-abd35850-631f-4f10-8e0b-fec4a116a56d from datanode 95acbcfe-ab6d-43a9-8eec-5b1c105a926f 2023-07-12 20:18:34,485 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x56da44f2b3990a32: from storage DS-abd35850-631f-4f10-8e0b-fec4a116a56d node DatanodeRegistration(127.0.0.1:46611, datanodeUuid=95acbcfe-ab6d-43a9-8eec-5b1c105a926f, infoPort=43467, infoSecurePort=0, ipcPort=42631, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 20:18:34,487 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x56da44f2b3990a32: Processing first storage report for DS-a6ca1d35-d932-4827-baf0-f2748e7b657e from datanode 95acbcfe-ab6d-43a9-8eec-5b1c105a926f 2023-07-12 20:18:34,487 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x56da44f2b3990a32: from storage DS-a6ca1d35-d932-4827-baf0-f2748e7b657e node DatanodeRegistration(127.0.0.1:46611, datanodeUuid=95acbcfe-ab6d-43a9-8eec-5b1c105a926f, infoPort=43467, infoSecurePort=0, ipcPort=42631, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:34,537 INFO [Listener at localhost/42631] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43477 2023-07-12 20:18:34,552 WARN [Listener at localhost/46535] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:34,593 WARN [Listener at localhost/46535] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:34,606 WARN [Listener at localhost/46535] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:34,607 INFO [Listener at localhost/46535] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:34,620 INFO [Listener at localhost/46535] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/Jetty_localhost_40239_datanode____.5r0c8t/webapp 2023-07-12 20:18:34,737 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e48ab86c1b583b6: Processing first storage report for DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9 from datanode be779b90-95d9-4dbe-b5f3-31421021f52e 2023-07-12 20:18:34,738 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e48ab86c1b583b6: from storage DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9 node DatanodeRegistration(127.0.0.1:43425, datanodeUuid=be779b90-95d9-4dbe-b5f3-31421021f52e, infoPort=39021, infoSecurePort=0, ipcPort=46535, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 20:18:34,738 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8e48ab86c1b583b6: Processing first storage report for DS-ccdd7551-04be-4da9-b0c0-2ee51b7e6408 from datanode be779b90-95d9-4dbe-b5f3-31421021f52e 2023-07-12 20:18:34,738 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8e48ab86c1b583b6: from storage DS-ccdd7551-04be-4da9-b0c0-2ee51b7e6408 node DatanodeRegistration(127.0.0.1:43425, datanodeUuid=be779b90-95d9-4dbe-b5f3-31421021f52e, infoPort=39021, infoSecurePort=0, ipcPort=46535, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:34,758 INFO [Listener at localhost/46535] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40239 2023-07-12 20:18:34,779 WARN [Listener at localhost/38141] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:34,960 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf283957ccad57be8: Processing first storage report for DS-55dc4230-888f-48dc-bcb9-a1254afa6deb from datanode f790184d-e7dc-4536-813a-dd9a5c163b1d 2023-07-12 20:18:34,960 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf283957ccad57be8: from storage DS-55dc4230-888f-48dc-bcb9-a1254afa6deb node DatanodeRegistration(127.0.0.1:46277, datanodeUuid=f790184d-e7dc-4536-813a-dd9a5c163b1d, infoPort=41569, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: true, processing 
time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:34,960 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf283957ccad57be8: Processing first storage report for DS-fe7cf38d-6fbf-4d37-bcbe-2cd97449f487 from datanode f790184d-e7dc-4536-813a-dd9a5c163b1d 2023-07-12 20:18:34,960 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf283957ccad57be8: from storage DS-fe7cf38d-6fbf-4d37-bcbe-2cd97449f487 node DatanodeRegistration(127.0.0.1:46277, datanodeUuid=f790184d-e7dc-4536-813a-dd9a5c163b1d, infoPort=41569, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1451507850;c=1689193113764), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:35,022 DEBUG [Listener at localhost/38141] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea 2023-07-12 20:18:35,039 INFO [Listener at localhost/38141] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/zookeeper_0, clientPort=52715, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 20:18:35,041 INFO [Listener at localhost/38141] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52715 2023-07-12 20:18:35,041 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,043 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,092 INFO [Listener at localhost/38141] util.FSUtils(471): Created version file at hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5 with version=8 2023-07-12 20:18:35,092 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/hbase-staging 2023-07-12 20:18:35,093 DEBUG [Listener at localhost/38141] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 20:18:35,094 DEBUG [Listener at localhost/38141] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 20:18:35,094 DEBUG [Listener at localhost/38141] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 20:18:35,094 DEBUG [Listener at localhost/38141] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 20:18:35,095 INFO [Listener at localhost/38141] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:35,095 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,096 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,096 INFO [Listener at localhost/38141] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:35,096 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,096 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:35,096 INFO [Listener at localhost/38141] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:35,098 INFO [Listener at localhost/38141] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34685 2023-07-12 20:18:35,099 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,100 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,102 INFO [Listener at localhost/38141] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34685 connecting to ZooKeeper ensemble=127.0.0.1:52715 2023-07-12 20:18:35,123 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:346850x0, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:35,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34685-0x1015b2ff09e0000 connected 2023-07-12 20:18:35,176 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:35,180 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:35,181 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:35,187 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34685 2023-07-12 20:18:35,188 DEBUG [Listener at localhost/38141] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34685 2023-07-12 20:18:35,191 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34685 2023-07-12 20:18:35,202 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34685 2023-07-12 20:18:35,203 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34685 2023-07-12 20:18:35,205 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:35,205 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:35,206 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:35,206 INFO [Listener at localhost/38141] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 20:18:35,207 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:35,207 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:35,207 INFO [Listener at localhost/38141] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 20:18:35,208 INFO [Listener at localhost/38141] http.HttpServer(1146): Jetty bound to port 36947 2023-07-12 20:18:35,208 INFO [Listener at localhost/38141] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:35,224 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,224 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ed4524b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:35,225 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,225 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@745323b2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:35,368 INFO [Listener at localhost/38141] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:35,370 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:35,370 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:35,370 INFO [Listener at localhost/38141] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:35,372 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,373 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@504f9b7b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/jetty-0_0_0_0-36947-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7469858376368443577/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:35,375 INFO [Listener at localhost/38141] server.AbstractConnector(333): Started ServerConnector@713fc71e{HTTP/1.1, (http/1.1)}{0.0.0.0:36947} 2023-07-12 20:18:35,375 INFO [Listener at localhost/38141] server.Server(415): Started @38114ms 2023-07-12 20:18:35,375 INFO [Listener at localhost/38141] master.HMaster(444): hbase.rootdir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5, hbase.cluster.distributed=false 2023-07-12 20:18:35,407 INFO [Listener at localhost/38141] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:35,407 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,407 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,407 INFO 
[Listener at localhost/38141] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:35,408 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,408 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:35,408 INFO [Listener at localhost/38141] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:35,410 INFO [Listener at localhost/38141] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45413 2023-07-12 20:18:35,410 INFO [Listener at localhost/38141] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:35,420 DEBUG [Listener at localhost/38141] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:35,421 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,422 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,423 INFO [Listener at localhost/38141] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45413 connecting to ZooKeeper ensemble=127.0.0.1:52715 2023-07-12 20:18:35,427 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:454130x0, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:35,428 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:454130x0, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:35,429 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:454130x0, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:35,429 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:454130x0, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:35,434 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45413-0x1015b2ff09e0001 connected 2023-07-12 20:18:35,434 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45413 2023-07-12 20:18:35,434 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45413 2023-07-12 20:18:35,434 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45413 2023-07-12 20:18:35,439 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, 
port=45413 2023-07-12 20:18:35,439 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45413 2023-07-12 20:18:35,441 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:35,441 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:35,441 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:35,442 INFO [Listener at localhost/38141] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:35,442 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:35,442 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:35,442 INFO [Listener at localhost/38141] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:35,443 INFO [Listener at localhost/38141] http.HttpServer(1146): Jetty bound to port 35687 2023-07-12 20:18:35,444 INFO [Listener at localhost/38141] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:35,447 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,447 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f6f98fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:35,448 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,448 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39caab19{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:35,575 INFO [Listener at localhost/38141] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:35,576 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:35,576 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:35,577 INFO [Listener at localhost/38141] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:35,578 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,579 INFO [Listener at localhost/38141] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5231fbaa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/jetty-0_0_0_0-35687-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5046140381193460957/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:35,580 INFO [Listener at localhost/38141] server.AbstractConnector(333): Started ServerConnector@1dd10cda{HTTP/1.1, (http/1.1)}{0.0.0.0:35687} 2023-07-12 20:18:35,581 INFO [Listener at localhost/38141] server.Server(415): Started @38320ms 2023-07-12 20:18:35,593 INFO [Listener at localhost/38141] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:35,593 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,594 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,594 INFO [Listener at localhost/38141] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:35,594 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,594 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:35,594 INFO [Listener at localhost/38141] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:35,595 INFO [Listener at localhost/38141] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39231 2023-07-12 20:18:35,596 INFO [Listener at localhost/38141] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:35,598 DEBUG [Listener at localhost/38141] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:35,599 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,601 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,603 INFO [Listener at localhost/38141] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39231 connecting to ZooKeeper ensemble=127.0.0.1:52715 2023-07-12 20:18:35,607 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:392310x0, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:35,608 DEBUG [Listener at 
localhost/38141] zookeeper.ZKUtil(164): regionserver:392310x0, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:35,608 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39231-0x1015b2ff09e0002 connected 2023-07-12 20:18:35,608 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:35,609 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:35,611 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39231 2023-07-12 20:18:35,613 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39231 2023-07-12 20:18:35,614 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39231 2023-07-12 20:18:35,614 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39231 2023-07-12 20:18:35,615 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39231 2023-07-12 20:18:35,617 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:35,617 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:35,617 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:35,618 INFO [Listener at localhost/38141] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:35,618 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:35,618 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:35,618 INFO [Listener at localhost/38141] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 20:18:35,618 INFO [Listener at localhost/38141] http.HttpServer(1146): Jetty bound to port 45725 2023-07-12 20:18:35,619 INFO [Listener at localhost/38141] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:35,624 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,624 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40e93b12{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:35,624 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,625 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24197d18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:35,740 INFO [Listener at localhost/38141] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:35,741 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:35,741 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:35,742 INFO [Listener at localhost/38141] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:35,742 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,743 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4b529140{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/jetty-0_0_0_0-45725-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6625235469251404025/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:35,745 INFO [Listener at localhost/38141] server.AbstractConnector(333): Started ServerConnector@67684b2a{HTTP/1.1, (http/1.1)}{0.0.0.0:45725} 2023-07-12 20:18:35,745 INFO [Listener at localhost/38141] server.Server(415): Started @38484ms 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:35,757 INFO 
[Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:35,757 INFO [Listener at localhost/38141] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:35,759 INFO [Listener at localhost/38141] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39477 2023-07-12 20:18:35,759 INFO [Listener at localhost/38141] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:35,761 DEBUG [Listener at localhost/38141] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:35,761 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,763 INFO [Listener at localhost/38141] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,764 INFO [Listener at localhost/38141] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39477 connecting to ZooKeeper ensemble=127.0.0.1:52715 2023-07-12 20:18:35,768 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:394770x0, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:35,770 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39477-0x1015b2ff09e0003 connected 2023-07-12 20:18:35,770 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:35,770 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:35,771 DEBUG [Listener at localhost/38141] zookeeper.ZKUtil(164): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:35,772 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39477 2023-07-12 20:18:35,772 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39477 2023-07-12 20:18:35,774 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39477 2023-07-12 20:18:35,775 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39477 2023-07-12 20:18:35,776 DEBUG [Listener at localhost/38141] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39477 2023-07-12 20:18:35,778 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:35,779 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:35,779 INFO [Listener at localhost/38141] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:35,779 INFO [Listener at localhost/38141] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:35,779 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:35,779 INFO [Listener at localhost/38141] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:35,780 INFO [Listener at localhost/38141] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:35,780 INFO [Listener at localhost/38141] http.HttpServer(1146): Jetty bound to port 41073 2023-07-12 20:18:35,780 INFO [Listener at localhost/38141] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:35,784 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,784 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5fd1460f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:35,784 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,784 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33eefdba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:35,902 INFO [Listener at localhost/38141] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:35,903 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:35,903 INFO [Listener at localhost/38141] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:35,903 INFO [Listener at localhost/38141] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:35,907 INFO [Listener at localhost/38141] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:35,909 INFO [Listener at localhost/38141] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@739ec01b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/java.io.tmpdir/jetty-0_0_0_0-41073-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8894358229586660875/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:35,910 INFO [Listener at localhost/38141] server.AbstractConnector(333): Started ServerConnector@9fc1965{HTTP/1.1, (http/1.1)}{0.0.0.0:41073} 2023-07-12 20:18:35,911 INFO [Listener at localhost/38141] server.Server(415): Started @38650ms 2023-07-12 20:18:35,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:35,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@11e3bfd8{HTTP/1.1, (http/1.1)}{0.0.0.0:40977} 2023-07-12 20:18:35,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38660ms 2023-07-12 20:18:35,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:35,923 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:35,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:35,925 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:35,925 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:35,925 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:35,925 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:35,926 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:35,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:35,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34685,1689193115094 from backup master directory 2023-07-12 20:18:35,931 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:35,933 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:35,933 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:35,933 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:35,933 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:35,963 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/hbase.id with ID: b583ba2a-0cba-47f2-8190-da9c86c7fd73 2023-07-12 20:18:35,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:35,981 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:35,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4f478e3b to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:36,002 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23116641, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:36,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:36,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 20:18:36,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:36,005 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store-tmp 2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:36,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:36,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 20:18:36,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:36,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/WALs/jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:36,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34685%2C1689193115094, suffix=, logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/WALs/jenkins-hbase4.apache.org,34685,1689193115094, archiveDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/oldWALs, maxLogs=10 2023-07-12 20:18:36,044 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK] 2023-07-12 20:18:36,047 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK] 2023-07-12 20:18:36,051 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK] 2023-07-12 20:18:36,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/WALs/jenkins-hbase4.apache.org,34685,1689193115094/jenkins-hbase4.apache.org%2C34685%2C1689193115094.1689193116021 2023-07-12 20:18:36,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK], DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK], DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK]] 2023-07-12 20:18:36,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:36,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,058 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,060 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 20:18:36,061 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 20:18:36,061 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,062 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:36,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:36,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11626097920, jitterRate=0.08276474475860596}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:36,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:36,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 20:18:36,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 20:18:36,070 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 20:18:36,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 20:18:36,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 20:18:36,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 20:18:36,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 20:18:36,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 20:18:36,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 20:18:36,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 20:18:36,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 20:18:36,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 20:18:36,080 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 20:18:36,080 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 20:18:36,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 20:18:36,082 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:36,083 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:36,083 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 20:18:36,083 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:36,083 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34685,1689193115094, sessionid=0x1015b2ff09e0000, setting cluster-up flag (Was=false) 2023-07-12 20:18:36,089 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 20:18:36,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:36,098 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 20:18:36,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:36,104 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.hbase-snapshot/.tmp 2023-07-12 20:18:36,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 20:18:36,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 20:18:36,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 20:18:36,107 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:36,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-12 20:18:36,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-12 20:18:36,109 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:36,112 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(951): ClusterId : b583ba2a-0cba-47f2-8190-da9c86c7fd73 2023-07-12 20:18:36,119 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:36,120 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(951): ClusterId : b583ba2a-0cba-47f2-8190-da9c86c7fd73 2023-07-12 20:18:36,121 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(951): ClusterId : b583ba2a-0cba-47f2-8190-da9c86c7fd73 2023-07-12 20:18:36,121 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:36,122 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:36,123 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:36,123 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:36,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:36,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 20:18:36,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:36,126 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 20:18:36,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:36,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:36,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:36,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:36,126 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 20:18:36,126 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:36,127 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,126 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:36,127 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:36,127 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:36,127 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:36,127 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,127 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:36,130 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:36,130 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:36,135 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ReadOnlyZKClient(139): Connect 0x1651ff90 to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:36,135 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ReadOnlyZKClient(139): Connect 0x11102c23 to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689193146136 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 
20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 20:18:36,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 20:18:36,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 20:18:36,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 20:18:36,140 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ReadOnlyZKClient(139): Connect 0x37cb1eee to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:36,143 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:36,143 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 20:18:36,144 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:36,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 20:18:36,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 
20:18:36,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193116152,5,FailOnTimeoutGroup] 2023-07-12 20:18:36,154 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193116152,5,FailOnTimeoutGroup] 2023-07-12 20:18:36,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 20:18:36,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,163 DEBUG [RS:1;jenkins-hbase4:39231] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1896132a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:36,163 DEBUG [RS:1;jenkins-hbase4:39231] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ca994e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:36,163 DEBUG [RS:2;jenkins-hbase4:39477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d33759e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:36,164 DEBUG [RS:2;jenkins-hbase4:39477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7831fc5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:36,165 DEBUG [RS:0;jenkins-hbase4:45413] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fd7c071, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:36,165 DEBUG [RS:0;jenkins-hbase4:45413] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1be84753, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:36,176 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39231 2023-07-12 
20:18:36,176 INFO [RS:1;jenkins-hbase4:39231] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:36,176 INFO [RS:1;jenkins-hbase4:39231] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:36,176 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:36,177 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45413 2023-07-12 20:18:36,177 INFO [RS:0;jenkins-hbase4:45413] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:36,177 INFO [RS:0;jenkins-hbase4:45413] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:36,177 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:36,177 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34685,1689193115094 with isa=jenkins-hbase4.apache.org/172.31.14.131:39231, startcode=1689193115593 2023-07-12 20:18:36,177 DEBUG [RS:1;jenkins-hbase4:39231] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:36,177 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34685,1689193115094 with isa=jenkins-hbase4.apache.org/172.31.14.131:45413, startcode=1689193115406 2023-07-12 20:18:36,178 DEBUG [RS:0;jenkins-hbase4:45413] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:36,178 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39477 2023-07-12 20:18:36,178 INFO [RS:2;jenkins-hbase4:39477] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:36,178 INFO [RS:2;jenkins-hbase4:39477] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:36,178 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 20:18:36,178 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34685,1689193115094 with isa=jenkins-hbase4.apache.org/172.31.14.131:39477, startcode=1689193115756 2023-07-12 20:18:36,179 DEBUG [RS:2;jenkins-hbase4:39477] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:36,182 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56341, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:36,182 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44447, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:36,182 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49691, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:36,183 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34685] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,183 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:36,184 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 20:18:36,184 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34685] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,184 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34685] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,185 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5 2023-07-12 20:18:36,185 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33535 2023-07-12 20:18:36,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 20:18:36,185 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36947 2023-07-12 20:18:36,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 20:18:36,185 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5 2023-07-12 20:18:36,185 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33535 2023-07-12 20:18:36,185 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36947 2023-07-12 20:18:36,187 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5 2023-07-12 20:18:36,187 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33535 2023-07-12 20:18:36,187 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36947 2023-07-12 20:18:36,191 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:36,195 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ZKUtil(162): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,195 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ZKUtil(162): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,195 WARN [RS:2;jenkins-hbase4:39477] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:36,195 WARN [RS:0;jenkins-hbase4:45413] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 20:18:36,195 INFO [RS:2;jenkins-hbase4:39477] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:36,195 INFO [RS:0;jenkins-hbase4:45413] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:36,195 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,195 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,196 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45413,1689193115406] 2023-07-12 20:18:36,196 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39477,1689193115756] 2023-07-12 20:18:36,196 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39231,1689193115593] 2023-07-12 20:18:36,199 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ZKUtil(162): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,199 WARN [RS:1;jenkins-hbase4:39231] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:36,199 INFO [RS:1;jenkins-hbase4:39231] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:36,199 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,199 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:36,200 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:36,200 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 
'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5 2023-07-12 20:18:36,217 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ZKUtil(162): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,217 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ZKUtil(162): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,217 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ZKUtil(162): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,217 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ZKUtil(162): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,217 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ZKUtil(162): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,217 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ZKUtil(162): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,218 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:36,219 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:36,219 INFO [RS:0;jenkins-hbase4:45413] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:36,219 INFO [RS:2;jenkins-hbase4:39477] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:36,224 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ZKUtil(162): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,224 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ZKUtil(162): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,225 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ZKUtil(162): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,225 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:36,227 INFO [RS:0;jenkins-hbase4:45413] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:36,227 INFO [RS:1;jenkins-hbase4:39231] regionserver.MetricsRegionServerWrapperImpl(154): 
Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:36,231 INFO [RS:1;jenkins-hbase4:39231] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:36,235 INFO [RS:2;jenkins-hbase4:39477] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:36,235 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,239 INFO [RS:2;jenkins-hbase4:39477] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:36,239 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,239 INFO [RS:1;jenkins-hbase4:39231] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:36,239 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,239 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:36,239 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:36,243 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:36,239 INFO [RS:0;jenkins-hbase4:45413] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:36,244 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,244 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:36,245 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/info 2023-07-12 20:18:36,246 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:36,246 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:36,247 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,247 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,247 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:36,247 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:1;jenkins-hbase4:39231] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:2;jenkins-hbase4:39477] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,248 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,249 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:36,249 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,249 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,249 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,249 DEBUG [RS:0;jenkins-hbase4:45413] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:36,249 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:36,249 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:36,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family table of region 1588230740 2023-07-12 20:18:36,251 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/table 2023-07-12 20:18:36,252 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:36,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,255 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740 2023-07-12 20:18:36,256 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740 2023-07-12 20:18:36,259 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 20:18:36,260 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:36,262 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,263 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,263 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,263 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:36,267 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,267 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:36,267 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,268 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11655975680, jitterRate=0.08554732799530029}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:36,268 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:36,268 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:36,268 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:36,268 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:36,268 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:36,268 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:36,270 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:36,270 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:36,271 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:36,271 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 20:18:36,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 20:18:36,272 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 20:18:36,283 INFO [RS:1;jenkins-hbase4:39231] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:36,283 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 20:18:36,283 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore 
name=jenkins-hbase4.apache.org,39231,1689193115593-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,284 INFO [RS:2;jenkins-hbase4:39477] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:36,284 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39477,1689193115756-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,289 INFO [RS:0;jenkins-hbase4:45413] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:36,289 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45413,1689193115406-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,297 INFO [RS:1;jenkins-hbase4:39231] regionserver.Replication(203): jenkins-hbase4.apache.org,39231,1689193115593 started 2023-07-12 20:18:36,297 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39231,1689193115593, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39231, sessionid=0x1015b2ff09e0002 2023-07-12 20:18:36,297 INFO [RS:2;jenkins-hbase4:39477] regionserver.Replication(203): jenkins-hbase4.apache.org,39477,1689193115756 started 2023-07-12 20:18:36,297 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:36,297 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39477,1689193115756, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39477, sessionid=0x1015b2ff09e0003 2023-07-12 20:18:36,297 DEBUG [RS:1;jenkins-hbase4:39231] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,297 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:36,297 DEBUG [RS:2;jenkins-hbase4:39477] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,297 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39477,1689193115756' 2023-07-12 20:18:36,297 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39231,1689193115593' 2023-07-12 20:18:36,297 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:36,297 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:36,298 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:36,298 
DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:36,298 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:36,298 DEBUG [RS:2;jenkins-hbase4:39477] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,298 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39477,1689193115756' 2023-07-12 20:18:36,298 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39231,1689193115593' 2023-07-12 20:18:36,298 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:36,299 DEBUG [RS:1;jenkins-hbase4:39231] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:36,299 DEBUG [RS:2;jenkins-hbase4:39477] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:36,299 DEBUG [RS:1;jenkins-hbase4:39231] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:36,299 DEBUG [RS:2;jenkins-hbase4:39477] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:36,299 INFO [RS:1;jenkins-hbase4:39231] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 20:18:36,299 INFO [RS:2;jenkins-hbase4:39477] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 20:18:36,300 INFO [RS:0;jenkins-hbase4:45413] regionserver.Replication(203): jenkins-hbase4.apache.org,45413,1689193115406 started 2023-07-12 20:18:36,300 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45413,1689193115406, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45413, sessionid=0x1015b2ff09e0001 2023-07-12 20:18:36,300 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:36,300 DEBUG [RS:0;jenkins-hbase4:45413] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,301 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45413,1689193115406' 2023-07-12 20:18:36,301 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:36,301 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:36,301 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:36,301 DEBUG 
[RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:36,302 DEBUG [RS:0;jenkins-hbase4:45413] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,302 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,302 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45413,1689193115406' 2023-07-12 20:18:36,302 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:36,302 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,302 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ZKUtil(398): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 20:18:36,302 DEBUG [RS:0;jenkins-hbase4:45413] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:36,302 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ZKUtil(398): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 20:18:36,302 INFO [RS:2;jenkins-hbase4:39477] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 20:18:36,302 INFO [RS:1;jenkins-hbase4:39231] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 20:18:36,303 DEBUG [RS:0;jenkins-hbase4:45413] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:36,303 INFO [RS:0;jenkins-hbase4:45413] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 20:18:36,303 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,303 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,303 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,303 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,303 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ZKUtil(398): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 20:18:36,303 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
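Aside: at this point all three region servers (RS:0, RS:1, RS:2) have their executor services, procedure members, quota managers and chores running. For context, a minimal sketch of how a JUnit test typically brings up such a three-region-server minicluster with HBaseTestingUtility follows; the class and method names in the sketch (other than the HBase test-utility API itself) are hypothetical, not taken from this run:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterSketchTest {                  // hypothetical test class
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    TEST_UTIL.startMiniCluster(3);                    // three region servers, as in the log above
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void clusterIsUp() throws Exception {
    Admin admin = TEST_UTIL.getAdmin();               // usable once the master has initialized
    // assertions against the running cluster would go here
  }
}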
2023-07-12 20:18:36,303 INFO [RS:0;jenkins-hbase4:45413] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 20:18:36,303 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,303 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,407 INFO [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45413%2C1689193115406, suffix=, logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,45413,1689193115406, archiveDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs, maxLogs=32 2023-07-12 20:18:36,407 INFO [RS:2;jenkins-hbase4:39477] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39477%2C1689193115756, suffix=, logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39477,1689193115756, archiveDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs, maxLogs=32 2023-07-12 20:18:36,410 INFO [RS:1;jenkins-hbase4:39231] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39231%2C1689193115593, suffix=, logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39231,1689193115593, archiveDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs, maxLogs=32 2023-07-12 20:18:36,429 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK] 2023-07-12 20:18:36,429 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK] 2023-07-12 20:18:36,431 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK] 2023-07-12 20:18:36,434 DEBUG [jenkins-hbase4:34685] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 20:18:36,435 DEBUG [jenkins-hbase4:34685] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:36,435 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK] 2023-07-12 20:18:36,435 DEBUG [jenkins-hbase4:34685] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:36,435 DEBUG [jenkins-hbase4:34685] 
balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:36,435 DEBUG [jenkins-hbase4:34685] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:36,435 DEBUG [jenkins-hbase4:34685] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:36,436 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK] 2023-07-12 20:18:36,436 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK] 2023-07-12 20:18:36,439 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45413,1689193115406, state=OPENING 2023-07-12 20:18:36,441 INFO [RS:1;jenkins-hbase4:39231] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39231,1689193115593/jenkins-hbase4.apache.org%2C39231%2C1689193115593.1689193116411 2023-07-12 20:18:36,441 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 20:18:36,442 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,443 INFO [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,45413,1689193115406/jenkins-hbase4.apache.org%2C45413%2C1689193115406.1689193116410 2023-07-12 20:18:36,443 DEBUG [RS:1;jenkins-hbase4:39231] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK], DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK], DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK]] 2023-07-12 20:18:36,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45413,1689193115406}] 2023-07-12 20:18:36,444 DEBUG [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK], DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK], DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK]] 2023-07-12 20:18:36,444 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:36,452 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK] 2023-07-12 20:18:36,452 DEBUG [RS-EventLoopGroup-11-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK] 2023-07-12 20:18:36,452 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK] 2023-07-12 20:18:36,455 INFO [RS:2;jenkins-hbase4:39477] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,39477,1689193115756/jenkins-hbase4.apache.org%2C39477%2C1689193115756.1689193116410 2023-07-12 20:18:36,457 DEBUG [RS:2;jenkins-hbase4:39477] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK], DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK], DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK]] 2023-07-12 20:18:36,604 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:36,604 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:36,607 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:36,612 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 20:18:36,612 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:36,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45413%2C1689193115406.meta, suffix=.meta, logDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,45413,1689193115406, archiveDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs, maxLogs=32 2023-07-12 20:18:36,632 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK] 2023-07-12 20:18:36,632 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK] 2023-07-12 20:18:36,633 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK] 2023-07-12 20:18:36,639 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/WALs/jenkins-hbase4.apache.org,45413,1689193115406/jenkins-hbase4.apache.org%2C45413%2C1689193115406.meta.1689193116614.meta 2023-07-12 20:18:36,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43425,DS-e8cbd3e1-84ba-4a04-8b70-bd1402b5bee9,DISK], DatanodeInfoWithStorage[127.0.0.1:46611,DS-abd35850-631f-4f10-8e0b-fec4a116a56d,DISK], DatanodeInfoWithStorage[127.0.0.1:46277,DS-55dc4230-888f-48dc-bcb9-a1254afa6deb,DISK]] 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 20:18:36,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 20:18:36,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 20:18:36,642 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:36,643 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/info 2023-07-12 20:18:36,643 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/info 2023-07-12 20:18:36,644 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:36,644 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,644 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:36,645 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:36,645 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:36,646 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:36,646 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,647 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 20:18:36,647 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/table 2023-07-12 20:18:36,647 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/table 2023-07-12 20:18:36,648 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:36,648 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,649 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740 2023-07-12 20:18:36,650 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740 2023-07-12 20:18:36,652 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 20:18:36,653 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:36,654 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11927053120, jitterRate=0.11079338192939758}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:36,654 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:36,655 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689193116604 2023-07-12 20:18:36,659 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 20:18:36,659 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 20:18:36,660 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45413,1689193115406, state=OPEN 2023-07-12 20:18:36,662 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 20:18:36,662 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:36,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 20:18:36,664 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45413,1689193115406 in 218 msec 2023-07-12 20:18:36,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 20:18:36,665 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 393 msec 2023-07-12 20:18:36,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 557 msec 2023-07-12 20:18:36,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689193116666, completionTime=-1 2023-07-12 20:18:36,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 20:18:36,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 20:18:36,669 DEBUG [hconnection-0x5c1e6ec9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:36,671 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36836, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:36,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 20:18:36,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689193176673 2023-07-12 20:18:36,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689193236673 2023-07-12 20:18:36,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34685,1689193115094-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34685,1689193115094-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34685,1689193115094-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34685, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 20:18:36,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:36,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 20:18:36,679 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 20:18:36,680 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:36,681 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:36,682 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,683 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9 empty. 2023-07-12 20:18:36,683 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,683 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 20:18:36,697 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:36,698 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9a76e2737d01d4ca92918a15f4f819f9, NAME => 'hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp 2023-07-12 20:18:36,707 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,708 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9a76e2737d01d4ca92918a15f4f819f9, disabling compactions & flushes 2023-07-12 20:18:36,708 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 
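Aside: the 'hbase:namespace' descriptor created above ({NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}) is built internally by the master. A client would express an equivalent schema for its own table through the HBase 2.x Admin API roughly as sketched below; the table name "sketch_table" is hypothetical, only the builder and Admin calls are real API:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("sketch_table"))          // hypothetical table name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW)                  // BLOOMFILTER => 'ROW'
              .setInMemory(true)                                  // IN_MEMORY => 'true'
              .setMaxVersions(10)                                 // VERSIONS => '10'
              .setBlocksize(8192)                                 // BLOCKSIZE => '8192'
              .build())
          .build();
      admin.createTable(td);
    }
  }
}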
2023-07-12 20:18:36,708 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,708 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. after waiting 0 ms 2023-07-12 20:18:36,708 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,708 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,708 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9a76e2737d01d4ca92918a15f4f819f9: 2023-07-12 20:18:36,710 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:36,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193116711"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193116711"}]},"ts":"1689193116711"} 2023-07-12 20:18:36,714 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:36,714 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:36,715 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193116714"}]},"ts":"1689193116714"} 2023-07-12 20:18:36,716 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 20:18:36,717 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:36,719 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 20:18:36,720 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:36,720 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} 
racks are {/default-rack=0} 2023-07-12 20:18:36,721 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:36,721 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:36,721 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:36,721 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:36,721 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:36,721 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9a76e2737d01d4ca92918a15f4f819f9, ASSIGN}] 2023-07-12 20:18:36,722 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9a76e2737d01d4ca92918a15f4f819f9, ASSIGN 2023-07-12 20:18:36,723 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,723 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9a76e2737d01d4ca92918a15f4f819f9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39477,1689193115756; forceNewPlan=false, retain=false 2023-07-12 20:18:36,723 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd empty. 
2023-07-12 20:18:36,724 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,724 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 20:18:36,736 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:36,737 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b65ba95095d02890ccd7aa08c32b30cd, NAME => 'hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp 2023-07-12 20:18:36,746 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,747 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing b65ba95095d02890ccd7aa08c32b30cd, disabling compactions & flushes 2023-07-12 20:18:36,747 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:36,747 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:36,747 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. after waiting 0 ms 2023-07-12 20:18:36,747 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:36,747 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 
2023-07-12 20:18:36,747 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for b65ba95095d02890ccd7aa08c32b30cd: 2023-07-12 20:18:36,749 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:36,750 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193116750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193116750"}]},"ts":"1689193116750"} 2023-07-12 20:18:36,751 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:36,752 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:36,752 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193116752"}]},"ts":"1689193116752"} 2023-07-12 20:18:36,753 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 20:18:36,757 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:36,757 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:36,757 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:36,757 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:36,757 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:36,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b65ba95095d02890ccd7aa08c32b30cd, ASSIGN}] 2023-07-12 20:18:36,759 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b65ba95095d02890ccd7aa08c32b30cd, ASSIGN 2023-07-12 20:18:36,759 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b65ba95095d02890ccd7aa08c32b30cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39231,1689193115593; forceNewPlan=false, retain=false 2023-07-12 20:18:36,760 INFO [jenkins-hbase4:34685] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 20:18:36,762 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=9a76e2737d01d4ca92918a15f4f819f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,762 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193116761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193116761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193116761"}]},"ts":"1689193116761"} 2023-07-12 20:18:36,762 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b65ba95095d02890ccd7aa08c32b30cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,762 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193116762"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193116762"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193116762"}]},"ts":"1689193116762"} 2023-07-12 20:18:36,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 9a76e2737d01d4ca92918a15f4f819f9, server=jenkins-hbase4.apache.org,39477,1689193115756}] 2023-07-12 20:18:36,764 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure b65ba95095d02890ccd7aa08c32b30cd, server=jenkins-hbase4.apache.org,39231,1689193115593}] 2023-07-12 20:18:36,917 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,917 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,917 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:36,917 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:36,920 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57294, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:36,920 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55638, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:36,924 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,924 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 
2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9a76e2737d01d4ca92918a15f4f819f9, NAME => 'hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b65ba95095d02890ccd7aa08c32b30cd, NAME => 'hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. service=MultiRowMutationService 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,925 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:36,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,926 INFO [StoreOpener-9a76e2737d01d4ca92918a15f4f819f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,927 INFO [StoreOpener-b65ba95095d02890ccd7aa08c32b30cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,928 DEBUG [StoreOpener-9a76e2737d01d4ca92918a15f4f819f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/info 2023-07-12 20:18:36,928 DEBUG [StoreOpener-9a76e2737d01d4ca92918a15f4f819f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/info 2023-07-12 20:18:36,928 DEBUG [StoreOpener-b65ba95095d02890ccd7aa08c32b30cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/m 2023-07-12 20:18:36,928 DEBUG [StoreOpener-b65ba95095d02890ccd7aa08c32b30cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/m 2023-07-12 20:18:36,928 INFO [StoreOpener-9a76e2737d01d4ca92918a15f4f819f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9a76e2737d01d4ca92918a15f4f819f9 columnFamilyName info 2023-07-12 20:18:36,928 INFO 
[StoreOpener-b65ba95095d02890ccd7aa08c32b30cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b65ba95095d02890ccd7aa08c32b30cd columnFamilyName m 2023-07-12 20:18:36,929 INFO [StoreOpener-9a76e2737d01d4ca92918a15f4f819f9-1] regionserver.HStore(310): Store=9a76e2737d01d4ca92918a15f4f819f9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,929 INFO [StoreOpener-b65ba95095d02890ccd7aa08c32b30cd-1] regionserver.HStore(310): Store=b65ba95095d02890ccd7aa08c32b30cd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:36,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,933 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:36,934 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:36,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:36,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:36,945 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 9a76e2737d01d4ca92918a15f4f819f9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11169744320, jitterRate=0.04026350378990173}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:36,945 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b65ba95095d02890ccd7aa08c32b30cd; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4e761846, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:36,945 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9a76e2737d01d4ca92918a15f4f819f9: 2023-07-12 20:18:36,945 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b65ba95095d02890ccd7aa08c32b30cd: 2023-07-12 20:18:36,946 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd., pid=9, masterSystemTime=1689193116917 2023-07-12 20:18:36,946 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9., pid=8, masterSystemTime=1689193116917 2023-07-12 20:18:36,949 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,950 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:36,950 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=9a76e2737d01d4ca92918a15f4f819f9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:36,951 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193116950"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193116950"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193116950"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193116950"}]},"ts":"1689193116950"} 2023-07-12 20:18:36,951 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:36,951 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 
2023-07-12 20:18:36,952 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b65ba95095d02890ccd7aa08c32b30cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:36,952 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193116952"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193116952"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193116952"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193116952"}]},"ts":"1689193116952"} 2023-07-12 20:18:36,955 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 20:18:36,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 9a76e2737d01d4ca92918a15f4f819f9, server=jenkins-hbase4.apache.org,39477,1689193115756 in 190 msec 2023-07-12 20:18:36,957 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 20:18:36,957 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure b65ba95095d02890ccd7aa08c32b30cd, server=jenkins-hbase4.apache.org,39231,1689193115593 in 190 msec 2023-07-12 20:18:36,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 20:18:36,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9a76e2737d01d4ca92918a15f4f819f9, ASSIGN in 235 msec 2023-07-12 20:18:36,959 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 20:18:36,959 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b65ba95095d02890ccd7aa08c32b30cd, ASSIGN in 200 msec 2023-07-12 20:18:36,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:36,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193116960"}]},"ts":"1689193116960"} 2023-07-12 20:18:36,960 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:36,960 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193116960"}]},"ts":"1689193116960"} 2023-07-12 20:18:36,961 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 20:18:36,961 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 20:18:36,963 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:36,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 285 msec 2023-07-12 20:18:36,965 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:36,966 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 248 msec 2023-07-12 20:18:36,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 20:18:36,981 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:36,981 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:36,985 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:36,987 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:36,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 20:18:36,997 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:37,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-12 20:18:37,002 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 20:18:37,009 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:37,012 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-12 20:18:37,022 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:37,024 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:37,025 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 20:18:37,025 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 20:18:37,027 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 20:18:37,030 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:37,030 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:37,031 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 20:18:37,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.098sec 2023-07-12 20:18:37,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-12 20:18:37,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:37,032 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:37,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 20:18:37,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 20:18:37,034 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:37,035 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:37,035 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34685,1689193115094] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 20:18:37,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-12 20:18:37,036 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,037 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35 empty. 2023-07-12 20:18:37,037 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,037 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 20:18:37,040 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 20:18:37,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 20:18:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 20:18:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 20:18:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34685,1689193115094-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 20:18:37,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34685,1689193115094-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 20:18:37,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 20:18:37,055 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:37,056 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0d9aae173d0dba6d69d7f47c5f9bbb35, NAME => 'hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 0d9aae173d0dba6d69d7f47c5f9bbb35, disabling compactions & flushes 2023-07-12 20:18:37,067 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. after waiting 0 ms 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,067 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,067 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 0d9aae173d0dba6d69d7f47c5f9bbb35: 2023-07-12 20:18:37,070 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:37,070 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689193117070"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193117070"}]},"ts":"1689193117070"} 2023-07-12 20:18:37,072 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:37,072 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:37,072 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193117072"}]},"ts":"1689193117072"} 2023-07-12 20:18:37,073 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 20:18:37,076 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:37,076 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:37,076 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:37,077 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:37,077 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:37,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=0d9aae173d0dba6d69d7f47c5f9bbb35, ASSIGN}] 2023-07-12 20:18:37,077 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=0d9aae173d0dba6d69d7f47c5f9bbb35, ASSIGN 2023-07-12 20:18:37,078 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=0d9aae173d0dba6d69d7f47c5f9bbb35, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39231,1689193115593; forceNewPlan=false, retain=false 2023-07-12 20:18:37,120 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(139): Connect 0x368055c6 to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:37,126 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@266ef640, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:37,127 DEBUG [hconnection-0xaaac910-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:37,129 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:37,130 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:37,131 INFO [Listener at localhost/38141] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:37,133 DEBUG [Listener at localhost/38141] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 20:18:37,135 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 20:18:37,139 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 20:18:37,139 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:37,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 20:18:37,140 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(139): Connect 0x6aa639d0 to 127.0.0.1:52715 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:37,145 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36988fdd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:37,145 INFO [Listener at localhost/38141] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52715 2023-07-12 20:18:37,148 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:37,149 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b2ff09e000a connected 2023-07-12 20:18:37,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 20:18:37,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 20:18:37,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 20:18:37,164 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:37,167 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-12 20:18:37,228 INFO [jenkins-hbase4:34685] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 20:18:37,230 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0d9aae173d0dba6d69d7f47c5f9bbb35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:37,230 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689193117230"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193117230"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193117230"}]},"ts":"1689193117230"} 2023-07-12 20:18:37,231 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 0d9aae173d0dba6d69d7f47c5f9bbb35, server=jenkins-hbase4.apache.org,39231,1689193115593}] 2023-07-12 20:18:37,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-12 20:18:37,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:37,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 20:18:37,267 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:37,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-12 20:18:37,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:37,269 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:37,270 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:37,272 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:37,273 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,274 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723 empty. 
2023-07-12 20:18:37,274 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,274 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 20:18:37,288 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:37,289 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4a2989ecc09669ceb7af41657955723, NAME => 'np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing e4a2989ecc09669ceb7af41657955723, disabling compactions & flushes 2023-07-12 20:18:37,299 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. after waiting 0 ms 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,299 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,299 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for e4a2989ecc09669ceb7af41657955723: 2023-07-12 20:18:37,302 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:37,303 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193117302"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193117302"}]},"ts":"1689193117302"} 2023-07-12 20:18:37,304 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 20:18:37,304 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:37,305 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193117305"}]},"ts":"1689193117305"} 2023-07-12 20:18:37,306 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 20:18:37,310 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:37,310 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:37,311 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:37,311 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:37,311 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:37,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, ASSIGN}] 2023-07-12 20:18:37,312 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, ASSIGN 2023-07-12 20:18:37,312 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39477,1689193115756; forceNewPlan=false, retain=false 2023-07-12 20:18:37,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:37,386 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 
2023-07-12 20:18:37,386 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d9aae173d0dba6d69d7f47c5f9bbb35, NAME => 'hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:37,387 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,387 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:37,387 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,387 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,388 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,389 DEBUG [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/q 2023-07-12 20:18:37,389 DEBUG [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/q 2023-07-12 20:18:37,390 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d9aae173d0dba6d69d7f47c5f9bbb35 columnFamilyName q 2023-07-12 20:18:37,390 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] regionserver.HStore(310): Store=0d9aae173d0dba6d69d7f47c5f9bbb35/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:37,390 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,392 DEBUG 
[StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/u 2023-07-12 20:18:37,392 DEBUG [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/u 2023-07-12 20:18:37,392 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d9aae173d0dba6d69d7f47c5f9bbb35 columnFamilyName u 2023-07-12 20:18:37,393 INFO [StoreOpener-0d9aae173d0dba6d69d7f47c5f9bbb35-1] regionserver.HStore(310): Store=0d9aae173d0dba6d69d7f47c5f9bbb35/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:37,394 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,394 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,396 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-12 20:18:37,397 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:37,399 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:37,399 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d9aae173d0dba6d69d7f47c5f9bbb35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10062102560, jitterRate=-0.06289367377758026}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 20:18:37,400 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d9aae173d0dba6d69d7f47c5f9bbb35: 2023-07-12 20:18:37,400 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35., pid=15, masterSystemTime=1689193117383 2023-07-12 20:18:37,401 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,401 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:37,402 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0d9aae173d0dba6d69d7f47c5f9bbb35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:37,402 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689193117402"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193117402"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193117402"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193117402"}]},"ts":"1689193117402"} 2023-07-12 20:18:37,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-12 20:18:37,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 0d9aae173d0dba6d69d7f47c5f9bbb35, server=jenkins-hbase4.apache.org,39231,1689193115593 in 172 msec 2023-07-12 20:18:37,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 20:18:37,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=0d9aae173d0dba6d69d7f47c5f9bbb35, ASSIGN in 328 msec 2023-07-12 20:18:37,407 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:37,407 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193117407"}]},"ts":"1689193117407"} 2023-07-12 20:18:37,408 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 20:18:37,410 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:37,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 378 msec 2023-07-12 20:18:37,463 INFO [jenkins-hbase4:34685] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 20:18:37,464 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e4a2989ecc09669ceb7af41657955723, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:37,464 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193117464"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193117464"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193117464"}]},"ts":"1689193117464"} 2023-07-12 20:18:37,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure e4a2989ecc09669ceb7af41657955723, server=jenkins-hbase4.apache.org,39477,1689193115756}] 2023-07-12 20:18:37,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:37,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 
2023-07-12 20:18:37,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4a2989ecc09669ceb7af41657955723, NAME => 'np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:37,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:37,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,622 INFO [StoreOpener-e4a2989ecc09669ceb7af41657955723-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,624 DEBUG [StoreOpener-e4a2989ecc09669ceb7af41657955723-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723/fam1 2023-07-12 20:18:37,624 DEBUG [StoreOpener-e4a2989ecc09669ceb7af41657955723-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723/fam1 2023-07-12 20:18:37,624 INFO [StoreOpener-e4a2989ecc09669ceb7af41657955723-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4a2989ecc09669ceb7af41657955723 columnFamilyName fam1 2023-07-12 20:18:37,624 INFO [StoreOpener-e4a2989ecc09669ceb7af41657955723-1] regionserver.HStore(310): Store=e4a2989ecc09669ceb7af41657955723/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:37,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:37,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:37,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4a2989ecc09669ceb7af41657955723; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10015901760, jitterRate=-0.06719645857810974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:37,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4a2989ecc09669ceb7af41657955723: 2023-07-12 20:18:37,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723., pid=18, masterSystemTime=1689193117617 2023-07-12 20:18:37,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,632 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:37,633 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e4a2989ecc09669ceb7af41657955723, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:37,633 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193117633"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193117633"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193117633"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193117633"}]},"ts":"1689193117633"} 2023-07-12 20:18:37,635 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 20:18:37,635 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure e4a2989ecc09669ceb7af41657955723, server=jenkins-hbase4.apache.org,39477,1689193115756 in 169 msec 2023-07-12 20:18:37,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 20:18:37,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, ASSIGN in 324 msec 2023-07-12 20:18:37,637 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:37,638 DEBUG [PEWorker-1] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193117637"}]},"ts":"1689193117637"} 2023-07-12 20:18:37,638 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 20:18:37,641 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:37,642 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 377 msec 2023-07-12 20:18:37,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:37,872 INFO [Listener at localhost/38141] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-12 20:18:37,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:37,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 20:18:37,876 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:37,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 20:18:37,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 20:18:37,895 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-12 20:18:37,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 20:18:37,980 INFO [Listener at localhost/38141] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-12 20:18:37,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:37,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:37,982 INFO [Listener at localhost/38141] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 20:18:37,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-12 20:18:37,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 20:18:37,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 20:18:37,986 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193117986"}]},"ts":"1689193117986"} 2023-07-12 20:18:37,987 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 20:18:37,989 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 20:18:37,990 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, UNASSIGN}] 2023-07-12 20:18:37,991 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, UNASSIGN 2023-07-12 20:18:37,991 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e4a2989ecc09669ceb7af41657955723, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:37,992 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193117991"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193117991"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193117991"}]},"ts":"1689193117991"} 2023-07-12 20:18:37,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure e4a2989ecc09669ceb7af41657955723, server=jenkins-hbase4.apache.org,39477,1689193115756}] 2023-07-12 20:18:38,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 20:18:38,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:38,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4a2989ecc09669ceb7af41657955723, disabling compactions & flushes 2023-07-12 20:18:38,147 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:38,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:38,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. after waiting 0 ms 2023-07-12 20:18:38,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:38,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/np1/table1/e4a2989ecc09669ceb7af41657955723/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:38,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723. 2023-07-12 20:18:38,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4a2989ecc09669ceb7af41657955723: 2023-07-12 20:18:38,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:38,154 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e4a2989ecc09669ceb7af41657955723, regionState=CLOSED 2023-07-12 20:18:38,154 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193118154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193118154"}]},"ts":"1689193118154"} 2023-07-12 20:18:38,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 20:18:38,157 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure e4a2989ecc09669ceb7af41657955723, server=jenkins-hbase4.apache.org,39477,1689193115756 in 162 msec 2023-07-12 20:18:38,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 20:18:38,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e4a2989ecc09669ceb7af41657955723, UNASSIGN in 167 msec 2023-07-12 20:18:38,158 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193118158"}]},"ts":"1689193118158"} 2023-07-12 20:18:38,160 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 20:18:38,162 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 20:18:38,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 180 msec 2023-07-12 20:18:38,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 20:18:38,288 INFO [Listener at localhost/38141] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 20:18:38,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-12 20:18:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,291 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 20:18:38,292 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:38,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:38,296 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:38,298 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723/fam1, FileablePath, hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723/recovered.edits] 2023-07-12 20:18:38,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 20:18:38,303 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723/recovered.edits/4.seqid to hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/archive/data/np1/table1/e4a2989ecc09669ceb7af41657955723/recovered.edits/4.seqid 2023-07-12 20:18:38,303 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/.tmp/data/np1/table1/e4a2989ecc09669ceb7af41657955723 2023-07-12 20:18:38,303 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 20:18:38,305 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,307 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 20:18:38,310 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-12 20:18:38,310 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,310 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 20:18:38,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193118311"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:38,312 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 20:18:38,312 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e4a2989ecc09669ceb7af41657955723, NAME => 'np1:table1,,1689193117264.e4a2989ecc09669ceb7af41657955723.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 20:18:38,312 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-12 20:18:38,312 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193118312"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:38,313 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 20:18:38,317 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 20:18:38,318 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 28 msec 2023-07-12 20:18:38,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 20:18:38,399 INFO [Listener at localhost/38141] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 20:18:38,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-12 20:18:38,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,413 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,415 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,418 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 20:18:38,419 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 20:18:38,419 DEBUG [Listener at 
localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:38,420 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,421 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 20:18:38,422 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-12 20:18:38,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34685] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 20:18:38,520 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 20:18:38,520 INFO [Listener at localhost/38141] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x368055c6 to 127.0.0.1:52715 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] util.JVMClusterUtil(257): Found active master hash=665535132, stopped=false 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:38,520 DEBUG [Listener at localhost/38141] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 20:18:38,520 INFO [Listener at localhost/38141] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:38,522 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:38,522 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:38,522 INFO [Listener at localhost/38141] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 20:18:38,522 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:38,522 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:38,522 DEBUG 
[Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:38,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:38,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:38,525 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4f478e3b to 127.0.0.1:52715 2023-07-12 20:18:38,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:38,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:38,525 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45413,1689193115406' ***** 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39231,1689193115593' ***** 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:38,526 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:38,526 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39477,1689193115756' ***** 2023-07-12 20:18:38,526 INFO [Listener at localhost/38141] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:38,527 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:38,538 INFO [RS:2;jenkins-hbase4:39477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@739ec01b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:38,538 INFO [RS:0;jenkins-hbase4:45413] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5231fbaa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:38,538 INFO [RS:1;jenkins-hbase4:39231] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4b529140{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-12 20:18:38,539 INFO [RS:2;jenkins-hbase4:39477] server.AbstractConnector(383): Stopped ServerConnector@9fc1965{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:38,539 INFO [RS:1;jenkins-hbase4:39231] server.AbstractConnector(383): Stopped ServerConnector@67684b2a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:38,539 INFO [RS:0;jenkins-hbase4:45413] server.AbstractConnector(383): Stopped ServerConnector@1dd10cda{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:38,539 INFO [RS:1;jenkins-hbase4:39231] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:38,539 INFO [RS:2;jenkins-hbase4:39477] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:38,539 INFO [RS:0;jenkins-hbase4:45413] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:38,542 INFO [RS:1;jenkins-hbase4:39231] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24197d18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:38,542 INFO [RS:0;jenkins-hbase4:45413] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39caab19{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:38,542 INFO [RS:2;jenkins-hbase4:39477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33eefdba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:38,542 INFO [RS:0;jenkins-hbase4:45413] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f6f98fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:38,542 INFO [RS:1;jenkins-hbase4:39231] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40e93b12{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:38,542 INFO [RS:2;jenkins-hbase4:39477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5fd1460f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:38,543 INFO [RS:0;jenkins-hbase4:45413] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:38,543 INFO [RS:0;jenkins-hbase4:45413] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:38,543 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:38,543 INFO [RS:0;jenkins-hbase4:45413] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 20:18:38,544 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:38,544 INFO [RS:1;jenkins-hbase4:39231] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:38,544 DEBUG [RS:0;jenkins-hbase4:45413] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11102c23 to 127.0.0.1:52715 2023-07-12 20:18:38,544 INFO [RS:1;jenkins-hbase4:39231] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:38,544 INFO [RS:2;jenkins-hbase4:39477] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:38,545 INFO [RS:1;jenkins-hbase4:39231] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:38,545 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(3305): Received CLOSE for 0d9aae173d0dba6d69d7f47c5f9bbb35 2023-07-12 20:18:38,544 DEBUG [RS:0;jenkins-hbase4:45413] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,544 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:38,545 INFO [RS:0;jenkins-hbase4:45413] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:38,545 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(3305): Received CLOSE for b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:38,545 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:38,545 INFO [RS:2;jenkins-hbase4:39477] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:38,546 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:38,546 INFO [RS:2;jenkins-hbase4:39477] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:38,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d9aae173d0dba6d69d7f47c5f9bbb35, disabling compactions & flushes 2023-07-12 20:18:38,546 INFO [RS:0;jenkins-hbase4:45413] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:38,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:38,547 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(3305): Received CLOSE for 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:38,546 DEBUG [RS:1;jenkins-hbase4:39231] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1651ff90 to 127.0.0.1:52715 2023-07-12 20:18:38,547 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:38,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:38,549 DEBUG [RS:2;jenkins-hbase4:39477] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37cb1eee to 127.0.0.1:52715 2023-07-12 20:18:38,547 INFO [RS:0;jenkins-hbase4:45413] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9a76e2737d01d4ca92918a15f4f819f9, disabling compactions & flushes 2023-07-12 20:18:38,549 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 20:18:38,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. after waiting 0 ms 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:38,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9a76e2737d01d4ca92918a15f4f819f9 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:38,549 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:38,550 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:38,550 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:38,550 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-12 20:18:38,549 DEBUG [RS:2;jenkins-hbase4:39477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,550 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 20:18:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. after waiting 0 ms 2023-07-12 20:18:38,550 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 
2023-07-12 20:18:38,547 DEBUG [RS:1;jenkins-hbase4:39231] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,550 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 20:18:38,550 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1478): Online Regions={0d9aae173d0dba6d69d7f47c5f9bbb35=hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35., b65ba95095d02890ccd7aa08c32b30cd=hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd.} 2023-07-12 20:18:38,550 DEBUG [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1504): Waiting on 0d9aae173d0dba6d69d7f47c5f9bbb35, b65ba95095d02890ccd7aa08c32b30cd 2023-07-12 20:18:38,550 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1478): Online Regions={9a76e2737d01d4ca92918a15f4f819f9=hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9.} 2023-07-12 20:18:38,550 DEBUG [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1504): Waiting on 9a76e2737d01d4ca92918a15f4f819f9 2023-07-12 20:18:38,549 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 20:18:38,551 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 20:18:38,551 DEBUG [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 20:18:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/quota/0d9aae173d0dba6d69d7f47c5f9bbb35/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:38,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d9aae173d0dba6d69d7f47c5f9bbb35: 2023-07-12 20:18:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689193117031.0d9aae173d0dba6d69d7f47c5f9bbb35. 2023-07-12 20:18:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b65ba95095d02890ccd7aa08c32b30cd, disabling compactions & flushes 2023-07-12 20:18:38,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:38,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:38,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. after waiting 0 ms 2023-07-12 20:18:38,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 
2023-07-12 20:18:38,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b65ba95095d02890ccd7aa08c32b30cd 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-12 20:18:38,568 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,570 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,572 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/.tmp/info/75c368ae42444140a865b725035901de 2023-07-12 20:18:38,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/info/4fc83eb80c034011837ced0613209568 2023-07-12 20:18:38,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/.tmp/m/20058f245e85431589d79162c61345f9 2023-07-12 20:18:38,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 75c368ae42444140a865b725035901de 2023-07-12 20:18:38,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/.tmp/info/75c368ae42444140a865b725035901de as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/info/75c368ae42444140a865b725035901de 2023-07-12 20:18:38,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4fc83eb80c034011837ced0613209568 2023-07-12 20:18:38,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/.tmp/m/20058f245e85431589d79162c61345f9 as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/m/20058f245e85431589d79162c61345f9 2023-07-12 20:18:38,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 75c368ae42444140a865b725035901de 2023-07-12 20:18:38,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/info/75c368ae42444140a865b725035901de, entries=3, sequenceid=8, filesize=5.0 K 
2023-07-12 20:18:38,598 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/m/20058f245e85431589d79162c61345f9, entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 20:18:38,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 9a76e2737d01d4ca92918a15f4f819f9 in 52ms, sequenceid=8, compaction requested=false 2023-07-12 20:18:38,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 20:18:38,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for b65ba95095d02890ccd7aa08c32b30cd in 40ms, sequenceid=7, compaction requested=false 2023-07-12 20:18:38,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 20:18:38,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/namespace/9a76e2737d01d4ca92918a15f4f819f9/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 20:18:38,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/rsgroup/b65ba95095d02890ccd7aa08c32b30cd/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 20:18:38,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:38,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9a76e2737d01d4ca92918a15f4f819f9: 2023-07-12 20:18:38,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689193116678.9a76e2737d01d4ca92918a15f4f819f9. 2023-07-12 20:18:38,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:38,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 2023-07-12 20:18:38,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b65ba95095d02890ccd7aa08c32b30cd: 2023-07-12 20:18:38,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689193116717.b65ba95095d02890ccd7aa08c32b30cd. 
2023-07-12 20:18:38,617 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/rep_barrier/c97d9fe980774ea7bb58d5f07ce72424 2023-07-12 20:18:38,623 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c97d9fe980774ea7bb58d5f07ce72424 2023-07-12 20:18:38,641 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/table/a3f6392a533c4cf9bbe4bcd9a37e12db 2023-07-12 20:18:38,646 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3f6392a533c4cf9bbe4bcd9a37e12db 2023-07-12 20:18:38,647 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/info/4fc83eb80c034011837ced0613209568 as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/info/4fc83eb80c034011837ced0613209568 2023-07-12 20:18:38,654 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4fc83eb80c034011837ced0613209568 2023-07-12 20:18:38,654 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/info/4fc83eb80c034011837ced0613209568, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 20:18:38,655 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/rep_barrier/c97d9fe980774ea7bb58d5f07ce72424 as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/rep_barrier/c97d9fe980774ea7bb58d5f07ce72424 2023-07-12 20:18:38,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c97d9fe980774ea7bb58d5f07ce72424 2023-07-12 20:18:38,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/rep_barrier/c97d9fe980774ea7bb58d5f07ce72424, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 20:18:38,661 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/.tmp/table/a3f6392a533c4cf9bbe4bcd9a37e12db as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/table/a3f6392a533c4cf9bbe4bcd9a37e12db 2023-07-12 20:18:38,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded 
Delete Family Bloom (CompoundBloomFilter) metadata for a3f6392a533c4cf9bbe4bcd9a37e12db 2023-07-12 20:18:38,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/table/a3f6392a533c4cf9bbe4bcd9a37e12db, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 20:18:38,668 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 118ms, sequenceid=31, compaction requested=false 2023-07-12 20:18:38,668 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 20:18:38,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 20:18:38,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:38,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:38,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:38,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:38,750 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39231,1689193115593; all regions closed. 2023-07-12 20:18:38,751 DEBUG [RS:1;jenkins-hbase4:39231] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 20:18:38,751 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39477,1689193115756; all regions closed. 2023-07-12 20:18:38,751 DEBUG [RS:2;jenkins-hbase4:39477] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 20:18:38,751 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45413,1689193115406; all regions closed. 2023-07-12 20:18:38,751 DEBUG [RS:0;jenkins-hbase4:45413] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
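Here hbase:meta flushes its info, rep_barrier and table families, writes its own recovered.edits/34.seqid marker and closes, after which each of the three region servers reports "stopping server ...; all regions closed" and cancels its QuotaRefresherChore. Stopping a single region server deliberately from a test looks roughly like the sketch below (MiniHBaseCluster method names; TEST_UTIL and index 0 are assumptions for illustration):

    // Hypothetical: stop one region server and wait for it, producing the same
    // "stopping server ...; all regions closed" sequence for that server.
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
    cluster.stopRegionServer(0);
    cluster.waitOnRegionServer(0);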
2023-07-12 20:18:38,763 DEBUG [RS:2;jenkins-hbase4:39477] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs 2023-07-12 20:18:38,763 INFO [RS:2;jenkins-hbase4:39477] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39477%2C1689193115756:(num 1689193116410) 2023-07-12 20:18:38,763 DEBUG [RS:2;jenkins-hbase4:39477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,763 INFO [RS:2;jenkins-hbase4:39477] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,763 DEBUG [RS:1;jenkins-hbase4:39231] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs 2023-07-12 20:18:38,763 INFO [RS:1;jenkins-hbase4:39231] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39231%2C1689193115593:(num 1689193116411) 2023-07-12 20:18:38,763 DEBUG [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs 2023-07-12 20:18:38,763 DEBUG [RS:1;jenkins-hbase4:39231] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,763 INFO [RS:2;jenkins-hbase4:39477] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:38,764 INFO [RS:1;jenkins-hbase4:39231] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,763 INFO [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45413%2C1689193115406.meta:.meta(num 1689193116614) 2023-07-12 20:18:38,764 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:38,764 INFO [RS:2;jenkins-hbase4:39477] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:38,764 INFO [RS:1;jenkins-hbase4:39231] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:38,764 INFO [RS:2;jenkins-hbase4:39477] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:38,764 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:38,764 INFO [RS:2;jenkins-hbase4:39477] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:38,764 INFO [RS:1;jenkins-hbase4:39231] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:38,764 INFO [RS:1;jenkins-hbase4:39231] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:38,765 INFO [RS:1;jenkins-hbase4:39231] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
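Each region server then closes its AsyncFSWAL, archives the last WAL file into .../oldWALs, and shuts down its chore service and compaction threads. A WAL roll can also be requested explicitly from a test, which closes the current writer in the same way; a sketch reusing the admin and TEST_UTIL handles assumed above (org.apache.hadoop.hbase.ServerName is the only extra import):

    // Illustrative only: ask one region server to roll its WAL; the closed file
    // is what later gets moved under .../oldWALs as logged above.
    ServerName rs = TEST_UTIL.getMiniHBaseCluster().getRegionServer(0).getServerName();
    admin.rollWALWriter(rs);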
2023-07-12 20:18:38,765 INFO [RS:2;jenkins-hbase4:39477] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39477 2023-07-12 20:18:38,768 INFO [RS:1;jenkins-hbase4:39231] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39231 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39477,1689193115756 2023-07-12 20:18:38,772 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:38,773 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:38,773 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:38,773 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39231,1689193115593 2023-07-12 20:18:38,773 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39477,1689193115756] 2023-07-12 20:18:38,773 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39477,1689193115756; numProcessing=1 2023-07-12 20:18:38,774 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): 
Node /hbase/draining/jenkins-hbase4.apache.org,39477,1689193115756 already deleted, retry=false 2023-07-12 20:18:38,774 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39477,1689193115756 expired; onlineServers=2 2023-07-12 20:18:38,774 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39231,1689193115593] 2023-07-12 20:18:38,774 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39231,1689193115593; numProcessing=2 2023-07-12 20:18:38,781 DEBUG [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/oldWALs 2023-07-12 20:18:38,781 INFO [RS:0;jenkins-hbase4:45413] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45413%2C1689193115406:(num 1689193116410) 2023-07-12 20:18:38,781 DEBUG [RS:0;jenkins-hbase4:45413] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,781 INFO [RS:0;jenkins-hbase4:45413] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:38,781 INFO [RS:0;jenkins-hbase4:45413] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:38,781 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:38,782 INFO [RS:0;jenkins-hbase4:45413] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45413 2023-07-12 20:18:38,875 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:38,875 INFO [RS:1;jenkins-hbase4:39231] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39231,1689193115593; zookeeper connection closed. 
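As each ephemeral /hbase/rs znode disappears, the master's RegionServerTracker expires the server, DeadServer records it, and onlineServers counts down (2 here, then 1 and 0 shortly after). A test can observe the same view through ClusterMetrics; a sketch with the assumed admin handle (extra imports: java.util.EnumSet and org.apache.hadoop.hbase.ClusterMetrics):

    // Illustrative: how many servers the master still considers live vs. dead.
    ClusterMetrics metrics = admin.getClusterMetrics(
        EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS, ClusterMetrics.Option.DEAD_SERVERS));
    int onlineServers = metrics.getLiveServerMetrics().size();
    int deadServers = metrics.getDeadServerNames().size();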
2023-07-12 20:18:38,875 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39231-0x1015b2ff09e0002, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:38,876 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4b071043] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4b071043 2023-07-12 20:18:38,877 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39231,1689193115593 already deleted, retry=false 2023-07-12 20:18:38,877 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45413,1689193115406 2023-07-12 20:18:38,877 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39231,1689193115593 expired; onlineServers=1 2023-07-12 20:18:38,877 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:38,878 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45413,1689193115406] 2023-07-12 20:18:38,878 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45413,1689193115406; numProcessing=3 2023-07-12 20:18:38,879 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45413,1689193115406 already deleted, retry=false 2023-07-12 20:18:38,879 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45413,1689193115406 expired; onlineServers=0 2023-07-12 20:18:38,879 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34685,1689193115094' ***** 2023-07-12 20:18:38,879 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 20:18:38,880 DEBUG [M:0;jenkins-hbase4:34685] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2394c2b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:38,880 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:38,881 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:38,882 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:38,882 INFO [M:0;jenkins-hbase4:34685] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@504f9b7b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:38,882 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:38,882 INFO [M:0;jenkins-hbase4:34685] server.AbstractConnector(383): Stopped ServerConnector@713fc71e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:38,883 INFO [M:0;jenkins-hbase4:34685] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:38,883 INFO [M:0;jenkins-hbase4:34685] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@745323b2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:38,883 INFO [M:0;jenkins-hbase4:34685] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ed4524b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:38,884 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34685,1689193115094 2023-07-12 20:18:38,884 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34685,1689193115094; all regions closed. 2023-07-12 20:18:38,884 DEBUG [M:0;jenkins-hbase4:34685] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:38,884 INFO [M:0;jenkins-hbase4:34685] master.HMaster(1491): Stopping master jetty server 2023-07-12 20:18:38,885 INFO [M:0;jenkins-hbase4:34685] server.AbstractConnector(383): Stopped ServerConnector@11e3bfd8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:38,885 DEBUG [M:0;jenkins-hbase4:34685] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 20:18:38,885 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 20:18:38,885 DEBUG [M:0;jenkins-hbase4:34685] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 20:18:38,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193116152] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193116152,5,FailOnTimeoutGroup] 2023-07-12 20:18:38,886 INFO [M:0;jenkins-hbase4:34685] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 20:18:38,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193116152] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193116152,5,FailOnTimeoutGroup] 2023-07-12 20:18:38,886 INFO [M:0;jenkins-hbase4:34685] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
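With no region servers left, the master stops its Jetty info server and servlet contexts and cancels the LogCleaner and HFileCleaner chores. Stopping the active master deliberately from a test is roughly the following (MiniHBaseCluster method names; index 0 is an assumption):

    // Sketch: stop the active master and wait for its thread to exit.
    TEST_UTIL.getMiniHBaseCluster().stopMaster(0);
    TEST_UTIL.getMiniHBaseCluster().waitOnMaster(0);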
2023-07-12 20:18:38,887 INFO [M:0;jenkins-hbase4:34685] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:38,887 DEBUG [M:0;jenkins-hbase4:34685] master.HMaster(1512): Stopping service threads 2023-07-12 20:18:38,887 INFO [M:0;jenkins-hbase4:34685] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 20:18:38,887 ERROR [M:0;jenkins-hbase4:34685] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 20:18:38,888 INFO [M:0;jenkins-hbase4:34685] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 20:18:38,888 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 20:18:38,888 DEBUG [M:0;jenkins-hbase4:34685] zookeeper.ZKUtil(398): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 20:18:38,888 WARN [M:0;jenkins-hbase4:34685] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 20:18:38,888 INFO [M:0;jenkins-hbase4:34685] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 20:18:38,889 INFO [M:0;jenkins-hbase4:34685] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 20:18:38,889 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:38,889 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:38,889 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:38,889 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:38,889 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 20:18:38,889 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.15 KB 2023-07-12 20:18:38,903 INFO [M:0;jenkins-hbase4:34685] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4f719744f5b94bef962157758e2c384a 2023-07-12 20:18:38,909 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4f719744f5b94bef962157758e2c384a as hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4f719744f5b94bef962157758e2c384a 2023-07-12 20:18:38,915 INFO [M:0;jenkins-hbase4:34685] regionserver.HStore(1080): Added hdfs://localhost:33535/user/jenkins/test-data/c678aaa0-afdd-5ba6-7e38-ff55fbaf6fc5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4f719744f5b94bef962157758e2c384a, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 20:18:38,916 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95220, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=194, compaction requested=false 2023-07-12 20:18:38,917 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:38,917 DEBUG [M:0;jenkins-hbase4:34685] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:38,921 INFO [M:0;jenkins-hbase4:34685] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 20:18:38,921 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:38,922 INFO [M:0;jenkins-hbase4:34685] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34685 2023-07-12 20:18:38,924 DEBUG [M:0;jenkins-hbase4:34685] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34685,1689193115094 already deleted, retry=false 2023-07-12 20:18:39,123 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,123 INFO [M:0;jenkins-hbase4:34685] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34685,1689193115094; zookeeper connection closed. 2023-07-12 20:18:39,123 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): master:34685-0x1015b2ff09e0000, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,223 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,223 INFO [RS:0;jenkins-hbase4:45413] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45413,1689193115406; zookeeper connection closed. 
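The master flushes its local master:store region (which holds procedure state), closes it, stops its Netty RPC server and ZooKeeper session, and exits. The datanode and MiniZooKeeperCluster teardown that follows is normally driven by one utility call; a sketch assuming TEST_UTIL is the suite's shared HBaseTestingUtility (the log does not show whether it runs mid-test or in a teardown hook):

    // Sketch: drives the remainder of the shutdown (HBase, then the mini DFS,
    // then the mini ZooKeeper cluster), ending in "Minicluster is down".
    TEST_UTIL.shutdownMiniCluster();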
2023-07-12 20:18:39,223 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:45413-0x1015b2ff09e0001, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,224 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2c91c1fb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2c91c1fb 2023-07-12 20:18:39,323 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,323 INFO [RS:2;jenkins-hbase4:39477] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39477,1689193115756; zookeeper connection closed. 2023-07-12 20:18:39,323 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): regionserver:39477-0x1015b2ff09e0003, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:39,324 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@53f9ff30] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@53f9ff30 2023-07-12 20:18:39,324 INFO [Listener at localhost/38141] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 20:18:39,324 WARN [Listener at localhost/38141] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:39,328 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:39,432 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:39,432 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-563691915-172.31.14.131-1689193113764 (Datanode Uuid f790184d-e7dc-4536-813a-dd9a5c163b1d) service to localhost/127.0.0.1:33535 2023-07-12 20:18:39,433 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data5/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,433 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data6/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,435 WARN [Listener at localhost/38141] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:39,439 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:39,544 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-12 20:18:39,544 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-563691915-172.31.14.131-1689193113764 (Datanode Uuid be779b90-95d9-4dbe-b5f3-31421021f52e) service to localhost/127.0.0.1:33535 2023-07-12 20:18:39,545 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data3/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,545 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data4/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,548 WARN [Listener at localhost/38141] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:39,553 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:39,657 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:39,657 WARN [BP-563691915-172.31.14.131-1689193113764 heartbeating to localhost/127.0.0.1:33535] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-563691915-172.31.14.131-1689193113764 (Datanode Uuid 95acbcfe-ab6d-43a9-8eec-5b1c105a926f) service to localhost/127.0.0.1:33535 2023-07-12 20:18:39,658 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data1/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,658 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/cluster_c38682cf-d1fc-98f1-6545-cecd75b4d94e/dfs/data/data2/current/BP-563691915-172.31.14.131-1689193113764] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:39,668 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:39,790 INFO [Listener at localhost/38141] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 20:18:39,828 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 20:18:39,828 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(445): 
System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.log.dir so I do NOT create it in target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/56849b9c-770d-f766-1dbb-5a7fa6b05aea/hadoop.tmp.dir so I do NOT create it in target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3, deleteOnExit=true 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/test.cache.data in system properties and HBase conf 2023-07-12 20:18:39,829 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir in system properties and HBase conf 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 20:18:39,830 DEBUG [Listener at localhost/38141] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:18:39,830 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/nfs.dump.dir in system properties and HBase conf 2023-07-12 20:18:39,831 INFO [Listener at localhost/38141] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir in system properties and HBase conf 2023-07-12 20:18:39,832 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 20:18:39,832 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 20:18:39,832 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 20:18:39,860 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:18:39,861 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:18:39,887 DEBUG [Listener at localhost/38141-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015b2ff09e000a, quorum=127.0.0.1:52715, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 20:18:39,887 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015b2ff09e000a, quorum=127.0.0.1:52715, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 20:18:39,914 WARN [Listener at localhost/38141] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:39,916 INFO [Listener at localhost/38141] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:39,921 INFO [Listener at localhost/38141] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/Jetty_localhost_36741_hdfs____odz6fv/webapp 2023-07-12 20:18:40,020 INFO [Listener at localhost/38141] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36741 2023-07-12 20:18:40,025 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 20:18:40,025 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 20:18:40,067 WARN [Listener at localhost/34547] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:40,080 WARN [Listener at localhost/34547] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:40,082 WARN [Listener 
at localhost/34547] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:40,083 INFO [Listener at localhost/34547] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:40,091 INFO [Listener at localhost/34547] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/Jetty_localhost_42309_datanode____rdmbdx/webapp 2023-07-12 20:18:40,185 INFO [Listener at localhost/34547] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42309 2023-07-12 20:18:40,193 WARN [Listener at localhost/46429] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:40,207 WARN [Listener at localhost/46429] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:40,210 WARN [Listener at localhost/46429] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:40,211 INFO [Listener at localhost/46429] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:40,216 INFO [Listener at localhost/46429] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/Jetty_localhost_45427_datanode____jyod33/webapp 2023-07-12 20:18:40,290 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x46ed83468a7121ea: Processing first storage report for DS-4b03354c-de95-4316-9f8f-31dabd8277ba from datanode 23cafc6d-1524-40ce-8464-43397641102c 2023-07-12 20:18:40,290 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x46ed83468a7121ea: from storage DS-4b03354c-de95-4316-9f8f-31dabd8277ba node DatanodeRegistration(127.0.0.1:35755, datanodeUuid=23cafc6d-1524-40ce-8464-43397641102c, infoPort=39275, infoSecurePort=0, ipcPort=46429, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,290 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x46ed83468a7121ea: Processing first storage report for DS-291fd708-dad4-4477-8718-2a049c797957 from datanode 23cafc6d-1524-40ce-8464-43397641102c 2023-07-12 20:18:40,290 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x46ed83468a7121ea: from storage DS-291fd708-dad4-4477-8718-2a049c797957 node DatanodeRegistration(127.0.0.1:35755, datanodeUuid=23cafc6d-1524-40ce-8464-43397641102c, infoPort=39275, infoSecurePort=0, ipcPort=46429, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,342 INFO [Listener at localhost/46429] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45427 2023-07-12 20:18:40,350 WARN [Listener at localhost/44023] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-12 20:18:40,368 WARN [Listener at localhost/44023] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 20:18:40,370 WARN [Listener at localhost/44023] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 20:18:40,371 INFO [Listener at localhost/44023] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 20:18:40,376 INFO [Listener at localhost/44023] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/Jetty_localhost_35607_datanode____.6v1xfi/webapp 2023-07-12 20:18:40,440 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc74ff71cc9328e93: Processing first storage report for DS-46809008-3e07-4a08-8ec7-7e450c28f5a1 from datanode 94a7f703-54e3-41fa-88b8-2ab27578b357 2023-07-12 20:18:40,440 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc74ff71cc9328e93: from storage DS-46809008-3e07-4a08-8ec7-7e450c28f5a1 node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=94a7f703-54e3-41fa-88b8-2ab27578b357, infoPort=39335, infoSecurePort=0, ipcPort=44023, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,440 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc74ff71cc9328e93: Processing first storage report for DS-b3916d0c-5a61-478b-bffe-5866acd538c8 from datanode 94a7f703-54e3-41fa-88b8-2ab27578b357 2023-07-12 20:18:40,440 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc74ff71cc9328e93: from storage DS-b3916d0c-5a61-478b-bffe-5866acd538c8 node DatanodeRegistration(127.0.0.1:33003, datanodeUuid=94a7f703-54e3-41fa-88b8-2ab27578b357, infoPort=39335, infoSecurePort=0, ipcPort=44023, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,482 INFO [Listener at localhost/44023] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35607 2023-07-12 20:18:40,490 WARN [Listener at localhost/33473] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 20:18:40,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b38895e2faeb667: Processing first storage report for DS-919165da-60cb-4c5a-8bd8-d6703428735f from datanode eac2f004-2fd8-4962-a16e-f56ab832563c 2023-07-12 20:18:40,597 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b38895e2faeb667: from storage DS-919165da-60cb-4c5a-8bd8-d6703428735f node DatanodeRegistration(127.0.0.1:45133, datanodeUuid=eac2f004-2fd8-4962-a16e-f56ab832563c, infoPort=35485, infoSecurePort=0, ipcPort=33473, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1b38895e2faeb667: Processing first storage 
report for DS-a455b3f1-0dc4-4ed6-979e-2aac4c2bb67f from datanode eac2f004-2fd8-4962-a16e-f56ab832563c 2023-07-12 20:18:40,597 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1b38895e2faeb667: from storage DS-a455b3f1-0dc4-4ed6-979e-2aac4c2bb67f node DatanodeRegistration(127.0.0.1:45133, datanodeUuid=eac2f004-2fd8-4962-a16e-f56ab832563c, infoPort=35485, infoSecurePort=0, ipcPort=33473, storageInfo=lv=-57;cid=testClusterID;nsid=362669750;c=1689193119866), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 20:18:40,601 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362 2023-07-12 20:18:40,607 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/zookeeper_0, clientPort=58245, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 20:18:40,609 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58245 2023-07-12 20:18:40,609 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,610 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,642 INFO [Listener at localhost/33473] util.FSUtils(471): Created version file at hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f with version=8 2023-07-12 20:18:40,642 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41485/user/jenkins/test-data/c3d10d1d-8983-9675-3df7-58b1043f2bb6/hbase-staging 2023-07-12 20:18:40,643 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 20:18:40,643 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 20:18:40,643 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 20:18:40,643 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
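At this point the new mini DFS and a fresh MiniZooKeeperCluster (client port 58245) are up, the hbase.rootdir version file has been written, and the LocalHBaseCluster is about to start a master and region servers on random ports. The restart corresponds to the same start-up call logged earlier with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3}; a sketch under the assumption that TEST_UTIL is the shared HBaseTestingUtility (imports: org.apache.hadoop.hbase.HBaseTestingUtility and org.apache.hadoop.hbase.StartMiniClusterOption):

    // Restart a mini cluster with the option values reported in the log.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    TEST_UTIL.startMiniCluster(option);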
2023-07-12 20:18:40,644 INFO [Listener at localhost/33473] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:40,644 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,645 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,645 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:40,645 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,645 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:40,645 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:40,647 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40547 2023-07-12 20:18:40,647 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,648 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,649 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40547 connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:40,661 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:405470x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:40,662 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40547-0x1015b30065e0000 connected 2023-07-12 20:18:40,676 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:40,677 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:40,677 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:40,678 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40547 2023-07-12 20:18:40,678 DEBUG [Listener at localhost/33473] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40547 2023-07-12 20:18:40,678 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40547 2023-07-12 20:18:40,678 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40547 2023-07-12 20:18:40,678 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40547 2023-07-12 20:18:40,680 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:40,680 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:40,680 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:40,681 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 20:18:40,681 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:40,681 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:40,681 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
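The new master binds its Netty RPC server, starts small handler pools (handlerCount=3 on the default queue), installs the standard HTTP filters, and disables the /prof endpoint because async-profiler is not configured. The reduced handler count is a test-time configuration rather than the production default of 30; one way a test could arrive at the value shown in the log is the sketch below (key from HConstants; whether this test actually sets it is an assumption):

    // Illustrative configuration only, applied before startMiniCluster().
    Configuration conf = TEST_UTIL.getConfiguration();
    conf.setInt(HConstants.REGION_SERVER_HANDLER_COUNT, 3);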
2023-07-12 20:18:40,682 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 41529 2023-07-12 20:18:40,682 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:40,686 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,686 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@63cb53b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:40,687 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,687 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7793b15a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:40,801 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:40,802 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:40,802 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:40,802 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:40,803 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,804 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@78273dc3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/jetty-0_0_0_0-41529-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7052193580403104190/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:40,805 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@79468c31{HTTP/1.1, (http/1.1)}{0.0.0.0:41529} 2023-07-12 20:18:40,805 INFO [Listener at localhost/33473] server.Server(415): Started @43545ms 2023-07-12 20:18:40,806 INFO [Listener at localhost/33473] master.HMaster(444): hbase.rootdir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f, hbase.cluster.distributed=false 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,819 
INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:40,819 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:40,821 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46531 2023-07-12 20:18:40,821 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:40,822 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:40,822 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,823 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:40,824 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46531 connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:40,828 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:465310x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:40,829 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46531-0x1015b30065e0001 connected 2023-07-12 20:18:40,829 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:40,830 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:40,831 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:40,831 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46531 2023-07-12 20:18:40,833 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46531 2023-07-12 20:18:40,833 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46531 2023-07-12 20:18:40,834 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46531 2023-07-12 20:18:40,836 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46531 2023-07-12 20:18:40,838 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:40,838 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:40,838 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:40,838 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:40,839 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:40,839 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:40,839 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:40,839 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 45225 2023-07-12 20:18:40,839 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:40,844 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,844 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e83c95c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:40,844 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,845 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f945bd4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:40,971 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:40,972 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:40,972 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:40,973 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 20:18:40,973 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:40,975 INFO 
[Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@61547d7e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/jetty-0_0_0_0-45225-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1209913036405300959/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:40,976 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@eea127f{HTTP/1.1, (http/1.1)}{0.0.0.0:45225} 2023-07-12 20:18:40,976 INFO [Listener at localhost/33473] server.Server(415): Started @43716ms 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:40,990 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:40,992 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38407 2023-07-12 20:18:40,992 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:41,002 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:41,003 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:41,004 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:41,005 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38407 connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:41,022 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:384070x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 
20:18:41,024 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:384070x0, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:41,024 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:384070x0, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:41,025 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:384070x0, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:41,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38407-0x1015b30065e0002 connected 2023-07-12 20:18:41,033 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38407 2023-07-12 20:18:41,034 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38407 2023-07-12 20:18:41,034 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38407 2023-07-12 20:18:41,043 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38407 2023-07-12 20:18:41,046 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38407 2023-07-12 20:18:41,048 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:41,048 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:41,048 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:41,049 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:41,049 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:41,049 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:41,049 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
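The repeated ZKUtil messages above ("Set watcher on znode that does not yet exist, /hbase/master") correspond to the standard ZooKeeper pattern of calling exists() with a watcher so the process is notified when the znode is created later; those registrations are what produce the NodeCreated events on /hbase/master and /hbase/running seen further down once the master registers. A hedged sketch using the plain org.apache.zookeeper client; the ensemble address is the one from this particular run, and any live ensemble would do.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class MasterZNodeWatch {
        public static void main(String[] args) throws Exception {
            // Connection string taken from this test run; substitute your own ensemble.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:58245", 90_000, event ->
                // Session-level watcher: fires for connection state changes (SyncConnected etc.).
                System.out.println("session event: " + event.getState()));

            Watcher masterWatcher = (WatchedEvent event) ->
                System.out.println("event " + event.getType() + " on " + event.getPath());

            // exists() registers the watch even when the znode is absent; a later create of
            // /hbase/master delivers a NodeCreated event to masterWatcher.
            Stat stat = zk.exists("/hbase/master", masterWatcher);
            System.out.println("/hbase/master " + (stat == null ? "does not exist yet" : "already exists"));

            Thread.sleep(60_000);   // keep the session open long enough to observe the event
            zk.close();
        }
    }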
2023-07-12 20:18:41,050 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 43257 2023-07-12 20:18:41,050 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:41,072 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,073 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21668f73{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:41,073 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,073 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3afce3b9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:41,209 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:41,210 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:41,211 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:41,211 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:41,212 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,213 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4446b6b9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/jetty-0_0_0_0-43257-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1275019405988485061/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:41,215 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@2492816c{HTTP/1.1, (http/1.1)}{0.0.0.0:43257} 2023-07-12 20:18:41,216 INFO [Listener at localhost/33473] server.Server(415): Started @43955ms 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:41,230 INFO 
[Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:41,230 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:41,232 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43827 2023-07-12 20:18:41,233 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:41,247 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:41,248 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:41,249 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:41,251 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43827 connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:41,255 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:438270x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:41,256 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:438270x0, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:41,257 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43827-0x1015b30065e0003 connected 2023-07-12 20:18:41,257 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:41,257 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:41,258 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43827 2023-07-12 20:18:41,259 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43827 2023-07-12 20:18:41,262 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43827 2023-07-12 20:18:41,263 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43827 2023-07-12 20:18:41,264 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=43827 2023-07-12 20:18:41,265 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:41,266 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 20:18:41,267 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 46293 2023-07-12 20:18:41,267 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:41,274 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,274 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@8b7f666{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:41,275 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,275 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@379a17d4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:41,410 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:41,411 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:41,411 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:41,411 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:41,412 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:41,412 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@688fd12e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/jetty-0_0_0_0-46293-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2615962643765499583/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:41,414 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@71a4cb2f{HTTP/1.1, (http/1.1)}{0.0.0.0:46293} 2023-07-12 20:18:41,414 INFO [Listener at localhost/33473] server.Server(415): Started @44154ms 2023-07-12 20:18:41,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:41,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1abdfc99{HTTP/1.1, (http/1.1)}{0.0.0.0:44627} 2023-07-12 20:18:41,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44159ms 2023-07-12 20:18:41,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,421 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:41,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,424 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:41,424 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:41,424 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:41,424 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:41,427 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:41,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40547,1689193120644 from backup master directory 2023-07-12 20:18:41,429 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:41,430 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,430 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 20:18:41,431 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:41,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/hbase.id with ID: 30d670ec-3811-47ae-a0f9-348192307d80 2023-07-12 20:18:41,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:41,468 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2695aa84 to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:41,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e682935, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:41,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:41,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 20:18:41,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:41,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store-tmp 2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:41,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:41,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
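The 'proc' family attributes printed for the master:store descriptor above (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', and so on) map directly onto the HBase 2.x descriptor builders. The master builds this descriptor internally; the sketch below only shows how the same attributes would be expressed through the public client API, with an illustrative table name since master:store itself is internal.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilyDescriptorSketch {
        public static TableDescriptor build() {
            // Mirrors the attributes the log prints for the 'proc' column family.
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setMaxVersions(1)                   // VERSIONS => '1'
                .setInMemory(false)                  // IN_MEMORY => 'false'
                .setBlocksize(65536)                 // BLOCKSIZE => '65536'
                .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
                .build();

            // "demo:store" is an illustrative name, not the internal master:store table.
            return TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo", "store"))
                .setColumnFamily(proc)
                .build();
        }
    }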
2023-07-12 20:18:41,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:41,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/WALs/jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,504 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40547%2C1689193120644, suffix=, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/WALs/jenkins-hbase4.apache.org,40547,1689193120644, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/oldWALs, maxLogs=10 2023-07-12 20:18:41,523 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:41,538 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:41,539 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:41,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/WALs/jenkins-hbase4.apache.org,40547,1689193120644/jenkins-hbase4.apache.org%2C40547%2C1689193120644.1689193121504 2023-07-12 20:18:41,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK], DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK]] 2023-07-12 20:18:41,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:41,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:41,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,545 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,547 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 20:18:41,547 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 20:18:41,548 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:41,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 20:18:41,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:41,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9924368800, jitterRate=-0.0757211297750473}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:41,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:41,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 20:18:41,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 20:18:41,583 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 20:18:41,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 20:18:41,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 20:18:41,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 20:18:41,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 20:18:41,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 20:18:41,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-12 20:18:41,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 20:18:41,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 20:18:41,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 20:18:41,589 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 20:18:41,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 20:18:41,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 20:18:41,592 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:41,592 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:41,592 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-12 20:18:41,592 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:41,592 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40547,1689193120644, sessionid=0x1015b30065e0000, setting cluster-up flag (Was=false) 2023-07-12 20:18:41,597 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 20:18:41,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,606 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 20:18:41,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:41,611 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.hbase-snapshot/.tmp 2023-07-12 20:18:41,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 20:18:41,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 20:18:41,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 20:18:41,613 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:41,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
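The coprocessor lines above show RSGroupAdminEndpoint being loaded as a system master coprocessor, which is what exposes the RSGroupAdminService this test exercises. On HBase 2.4 that endpoint is normally wired in through configuration; a hedged sketch of the usual settings follows (the test harness applies its own configuration, which this excerpt does not show).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupWiringSketch {
        public static Configuration rsGroupEnabledConf() {
            Configuration conf = HBaseConfiguration.create();
            // Load the rsgroup admin endpoint on the master (the coprocessor seen in the log).
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            // Pair it with the group-aware balancer so assignments respect group membership.
            conf.set("hbase.master.loadbalancer.class",
                "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
            return conf;
        }
    }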
2023-07-12 20:18:41,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:41,621 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(951): ClusterId : 30d670ec-3811-47ae-a0f9-348192307d80 2023-07-12 20:18:41,621 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(951): ClusterId : 30d670ec-3811-47ae-a0f9-348192307d80 2023-07-12 20:18:41,622 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:41,625 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:41,621 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(951): ClusterId : 30d670ec-3811-47ae-a0f9-348192307d80 2023-07-12 20:18:41,625 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:41,628 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:41,628 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:41,628 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:41,628 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:41,628 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:41,628 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:41,630 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:41,631 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:41,631 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:41,632 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ReadOnlyZKClient(139): Connect 0x53d004a6 to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:41,634 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ReadOnlyZKClient(139): Connect 0x22d656d3 to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:41,634 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ReadOnlyZKClient(139): Connect 0x74c48c7f to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:41,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:41,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, 
RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 20:18:41,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 20:18:41,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:41,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,652 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689193151652 2023-07-12 20:18:41,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 20:18:41,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 20:18:41,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 20:18:41,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 20:18:41,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 20:18:41,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 20:18:41,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 20:18:41,659 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:41,660 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 20:18:41,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 20:18:41,660 DEBUG [RS:1;jenkins-hbase4:38407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67d4cced, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:41,660 DEBUG [RS:2;jenkins-hbase4:43827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31c60c83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:41,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 20:18:41,660 DEBUG [RS:1;jenkins-hbase4:38407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e4e3691, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:41,660 DEBUG [RS:2;jenkins-hbase4:43827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28db912b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:41,660 DEBUG [RS:0;jenkins-hbase4:46531] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d17f0ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:41,660 DEBUG [RS:0;jenkins-hbase4:46531] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59286fd6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:41,661 INFO [PEWorker-1] 
util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:41,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 20:18:41,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 20:18:41,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193121667,5,FailOnTimeoutGroup] 2023-07-12 20:18:41,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193121668,5,FailOnTimeoutGroup] 2023-07-12 20:18:41,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 20:18:41,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,675 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46531 2023-07-12 20:18:41,675 INFO [RS:0;jenkins-hbase4:46531] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:41,675 INFO [RS:0;jenkins-hbase4:46531] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:41,675 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 20:18:41,676 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40547,1689193120644 with isa=jenkins-hbase4.apache.org/172.31.14.131:46531, startcode=1689193120819 2023-07-12 20:18:41,676 DEBUG [RS:0;jenkins-hbase4:46531] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:41,677 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43827 2023-07-12 20:18:41,677 INFO [RS:2;jenkins-hbase4:43827] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:41,677 INFO [RS:2;jenkins-hbase4:43827] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:41,677 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:41,678 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40547,1689193120644 with isa=jenkins-hbase4.apache.org/172.31.14.131:43827, startcode=1689193121229 2023-07-12 20:18:41,678 DEBUG [RS:2;jenkins-hbase4:43827] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:41,680 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39683, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:41,680 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38407 2023-07-12 20:18:41,680 INFO [RS:1;jenkins-hbase4:38407] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:41,680 INFO [RS:1;jenkins-hbase4:38407] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:41,680 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:41,682 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,682 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40547,1689193120644 with isa=jenkins-hbase4.apache.org/172.31.14.131:38407, startcode=1689193120989 2023-07-12 20:18:41,682 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 20:18:41,682 DEBUG [RS:1;jenkins-hbase4:38407] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:41,682 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46823, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:41,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 20:18:41,683 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,683 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f 2023-07-12 20:18:41,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 20:18:41,683 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34547 2023-07-12 20:18:41,683 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 20:18:41,684 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46067, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:41,684 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,684 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 20:18:41,684 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 20:18:41,683 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f 2023-07-12 20:18:41,684 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f 2023-07-12 20:18:41,684 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34547 2023-07-12 20:18:41,684 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41529 2023-07-12 20:18:41,684 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41529 2023-07-12 20:18:41,684 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34547 2023-07-12 20:18:41,685 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41529 2023-07-12 20:18:41,688 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:41,688 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:41,688 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f 2023-07-12 20:18:41,691 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:41,692 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,692 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,692 WARN [RS:0;jenkins-hbase4:46531] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:41,692 WARN [RS:1;jenkins-hbase4:38407] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 20:18:41,692 INFO [RS:0;jenkins-hbase4:46531] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:41,692 INFO [RS:1;jenkins-hbase4:38407] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:41,692 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,692 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,692 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,692 WARN [RS:2;jenkins-hbase4:43827] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 20:18:41,692 INFO [RS:2;jenkins-hbase4:43827] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:41,692 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,703 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43827,1689193121229] 2023-07-12 20:18:41,704 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46531,1689193120819] 2023-07-12 20:18:41,704 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38407,1689193120989] 2023-07-12 20:18:41,708 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,708 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,708 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,708 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,709 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,709 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,709 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,709 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,709 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,710 DEBUG [RS:2;jenkins-hbase4:43827] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:41,710 INFO [RS:2;jenkins-hbase4:43827] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:41,710 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-12 20:18:41,711 DEBUG [RS:1;jenkins-hbase4:38407] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:41,711 INFO [RS:1;jenkins-hbase4:38407] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:41,711 INFO [RS:0;jenkins-hbase4:46531] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:41,711 INFO [RS:2;jenkins-hbase4:43827] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:41,712 INFO [RS:2;jenkins-hbase4:43827] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:41,712 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,712 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:41,713 INFO [RS:1;jenkins-hbase4:38407] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:41,713 INFO [RS:1;jenkins-hbase4:38407] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:41,713 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,713 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:41,713 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,714 INFO [RS:0;jenkins-hbase4:46531] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,715 INFO [RS:0;jenkins-hbase4:46531] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,715 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,715 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,715 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:41,715 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:2;jenkins-hbase4:43827] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,716 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:41,717 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,717 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,717 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,717 DEBUG [RS:1;jenkins-hbase4:38407] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,718 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:41,731 
DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:41,732 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,732 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,732 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,732 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,732 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,733 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,733 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,733 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,733 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:41,734 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG [RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,735 DEBUG 
[RS:0;jenkins-hbase4:46531] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:41,736 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/info 2023-07-12 20:18:41,736 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:41,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:41,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:41,739 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:41,739 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:41,744 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:41,744 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 20:18:41,746 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/table 2023-07-12 20:18:41,746 INFO 
[StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:41,747 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:41,748 INFO [RS:2;jenkins-hbase4:43827] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:41,748 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43827,1689193121229-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,750 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,752 INFO [RS:1;jenkins-hbase4:38407] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:41,752 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740 2023-07-12 20:18:41,754 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38407,1689193120989-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,754 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,754 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:41,754 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740 2023-07-12 20:18:41,757 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 20:18:41,758 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:41,763 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:41,764 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12005902080, jitterRate=0.11813676357269287}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:41,764 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:41,764 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:41,764 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:41,764 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:41,764 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:41,764 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:41,767 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:41,767 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:41,770 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 20:18:41,770 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 20:18:41,771 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 20:18:41,771 INFO [RS:1;jenkins-hbase4:38407] regionserver.Replication(203): jenkins-hbase4.apache.org,38407,1689193120989 started 2023-07-12 20:18:41,771 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38407,1689193120989, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38407, sessionid=0x1015b30065e0002 2023-07-12 20:18:41,772 INFO [RS:0;jenkins-hbase4:46531] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:41,772 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46531,1689193120819-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:41,772 INFO [RS:2;jenkins-hbase4:43827] regionserver.Replication(203): jenkins-hbase4.apache.org,43827,1689193121229 started 2023-07-12 20:18:41,772 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 20:18:41,773 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43827,1689193121229, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43827, sessionid=0x1015b30065e0003 2023-07-12 20:18:41,778 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:41,780 DEBUG [RS:1;jenkins-hbase4:38407] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,780 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38407,1689193120989' 2023-07-12 20:18:41,780 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:41,779 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:41,780 DEBUG [RS:2;jenkins-hbase4:43827] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,780 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43827,1689193121229' 2023-07-12 20:18:41,780 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:41,780 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 20:18:41,780 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:41,781 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:41,781 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:41,781 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:41,781 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:41,781 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:41,781 DEBUG [RS:2;jenkins-hbase4:43827] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:41,781 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43827,1689193121229' 
2023-07-12 20:18:41,781 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:41,781 DEBUG [RS:1;jenkins-hbase4:38407] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:41,781 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38407,1689193120989' 2023-07-12 20:18:41,782 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:41,782 DEBUG [RS:2;jenkins-hbase4:43827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:41,782 DEBUG [RS:1;jenkins-hbase4:38407] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:41,782 DEBUG [RS:2;jenkins-hbase4:43827] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:41,782 INFO [RS:2;jenkins-hbase4:43827] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:41,782 INFO [RS:2;jenkins-hbase4:43827] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 20:18:41,782 DEBUG [RS:1;jenkins-hbase4:38407] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:41,783 INFO [RS:1;jenkins-hbase4:38407] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:41,783 INFO [RS:1;jenkins-hbase4:38407] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 20:18:41,790 INFO [RS:0;jenkins-hbase4:46531] regionserver.Replication(203): jenkins-hbase4.apache.org,46531,1689193120819 started 2023-07-12 20:18:41,791 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46531,1689193120819, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46531, sessionid=0x1015b30065e0001 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46531,1689193120819' 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:41,791 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46531,1689193120819' 2023-07-12 20:18:41,792 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:41,792 DEBUG [RS:0;jenkins-hbase4:46531] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:41,792 DEBUG [RS:0;jenkins-hbase4:46531] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:41,792 INFO [RS:0;jenkins-hbase4:46531] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:41,792 INFO [RS:0;jenkins-hbase4:46531] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 20:18:41,884 INFO [RS:1;jenkins-hbase4:38407] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38407%2C1689193120989, suffix=, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,38407,1689193120989, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs, maxLogs=32 2023-07-12 20:18:41,884 INFO [RS:2;jenkins-hbase4:43827] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43827%2C1689193121229, suffix=, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,43827,1689193121229, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs, maxLogs=32 2023-07-12 20:18:41,894 INFO [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46531%2C1689193120819, suffix=, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,46531,1689193120819, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs, maxLogs=32 2023-07-12 20:18:41,903 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:41,904 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:41,904 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:41,915 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:41,915 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:41,915 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:41,920 INFO [RS:1;jenkins-hbase4:38407] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,38407,1689193120989/jenkins-hbase4.apache.org%2C38407%2C1689193120989.1689193121885 2023-07-12 20:18:41,924 INFO [RS:2;jenkins-hbase4:43827] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,43827,1689193121229/jenkins-hbase4.apache.org%2C43827%2C1689193121229.1689193121885 2023-07-12 20:18:41,924 DEBUG [RS:1;jenkins-hbase4:38407] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK]] 2023-07-12 20:18:41,924 DEBUG [RS:2;jenkins-hbase4:43827] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK]] 2023-07-12 20:18:41,930 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:41,930 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:41,930 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:41,931 DEBUG [jenkins-hbase4:40547] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 20:18:41,931 DEBUG [jenkins-hbase4:40547] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:41,931 DEBUG [jenkins-hbase4:40547] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:41,931 DEBUG [jenkins-hbase4:40547] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:41,931 DEBUG [jenkins-hbase4:40547] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:41,932 DEBUG [jenkins-hbase4:40547] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:41,935 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46531,1689193120819, state=OPENING 2023-07-12 20:18:41,937 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 20:18:41,938 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:41,938 INFO [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,46531,1689193120819/jenkins-hbase4.apache.org%2C46531%2C1689193120819.1689193121894 2023-07-12 20:18:41,938 DEBUG [RS:0;jenkins-hbase4:46531] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK], DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK]] 2023-07-12 20:18:41,939 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:41,942 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46531,1689193120819}] 2023-07-12 20:18:42,097 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,097 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 20:18:42,099 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42462, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 20:18:42,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 20:18:42,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:42,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46531%2C1689193120819.meta, suffix=.meta, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,46531,1689193120819, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs, maxLogs=32 2023-07-12 20:18:42,119 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:42,119 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:42,119 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:42,122 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,46531,1689193120819/jenkins-hbase4.apache.org%2C46531%2C1689193120819.meta.1689193122105.meta 2023-07-12 20:18:42,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], 
DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK]] 2023-07-12 20:18:42,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:42,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:42,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 20:18:42,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 20:18:42,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 20:18:42,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:42,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 20:18:42,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 20:18:42,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 20:18:42,126 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/info 2023-07-12 20:18:42,126 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/info 2023-07-12 20:18:42,126 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 20:18:42,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:42,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 20:18:42,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:42,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/rep_barrier 2023-07-12 20:18:42,128 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 20:18:42,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:42,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 20:18:42,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/table 2023-07-12 20:18:42,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/table 2023-07-12 20:18:42,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 20:18:42,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:42,131 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740 2023-07-12 20:18:42,133 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740 2023-07-12 20:18:42,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 20:18:42,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 20:18:42,137 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11501198720, jitterRate=0.0711326003074646}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 20:18:42,137 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 20:18:42,138 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689193122097 2023-07-12 20:18:42,144 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 20:18:42,145 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 20:18:42,145 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46531,1689193120819, state=OPEN 2023-07-12 20:18:42,147 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 20:18:42,147 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 20:18:42,148 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 20:18:42,148 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46531,1689193120819 in 205 msec 2023-07-12 20:18:42,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 20:18:42,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 377 msec 2023-07-12 20:18:42,151 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 537 msec 2023-07-12 20:18:42,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689193122151, completionTime=-1 2023-07-12 
20:18:42,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 20:18:42,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 20:18:42,155 DEBUG [hconnection-0x49393020-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:42,156 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42478, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:42,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 20:18:42,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689193182157 2023-07-12 20:18:42,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689193242158 2023-07-12 20:18:42,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-12 20:18:42,158 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 20:18:42,166 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1689193120644-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,166 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1689193120644-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,166 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1689193120644-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40547, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
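As an illustrative aside (not code captured from this run): the entries above show the active master finishing startup once all three region servers have reported in and the AssignmentManager has joined the cluster. A test built on the same utility would typically reach this point with something like the sketch below; the class and method names are the standard HBaseTestingUtility API, while the specific option values are assumptions chosen to match the "RegionServer count=3" entry above.

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.StartMiniClusterOption;
  import org.apache.hadoop.hbase.TableName;

  // Sketch only: bring up a minicluster with three region servers and block
  // until hbase:meta has been assigned, mirroring the procedure log above.
  public static HBaseTestingUtility startClusterLikeAbove() throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
    util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
    return util;
  }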
2023-07-12 20:18:42,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:42,170 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 20:18:42,174 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 20:18:42,174 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:42,177 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:42,180 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,183 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5 empty. 2023-07-12 20:18:42,186 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,186 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 20:18:42,222 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:42,227 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4a29c7853f4c649e9db75dd1eab3fe5, NAME => 'hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp 2023-07-12 20:18:42,237 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-12 20:18:42,241 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 20:18:42,248 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:42,249 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:42,251 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,252 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390 empty. 2023-07-12 20:18:42,252 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,252 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 20:18:42,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:42,254 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e4a29c7853f4c649e9db75dd1eab3fe5, disabling compactions & flushes 2023-07-12 20:18:42,255 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:42,255 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:42,255 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. after waiting 0 ms 2023-07-12 20:18:42,255 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:42,255 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 
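As an illustrative aside: the CreateTableProcedure entries above print the full column-family spec used for 'hbase:namespace' (VERSIONS => '10', IN_MEMORY => 'true', BLOCKSIZE => '8192'). That table is created internally by the master's TableNamespaceManager, not by client code; purely as a sketch, the same settings can be expressed with the 2.x client builders as below. The builder methods are the standard client API; the helper name and table name parameter are hypothetical.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.util.Bytes;

  // Sketch: a descriptor with the same 'info' family settings as logged above.
  static void createNamespaceLikeTable(Admin admin, TableName name) throws java.io.IOException {
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(10)   // VERSIONS => '10'
            .setInMemory(true)    // IN_MEMORY => 'true'
            .setBlocksize(8192)   // BLOCKSIZE => '8192'
            .build())
        .build();
    admin.createTable(desc);
  }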
2023-07-12 20:18:42,255 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e4a29c7853f4c649e9db75dd1eab3fe5: 2023-07-12 20:18:42,260 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:42,262 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193122261"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193122261"}]},"ts":"1689193122261"} 2023-07-12 20:18:42,265 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 20:18:42,266 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:42,266 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193122266"}]},"ts":"1689193122266"} 2023-07-12 20:18:42,267 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 20:18:42,271 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:42,271 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:42,271 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:42,271 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:42,271 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:42,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e4a29c7853f4c649e9db75dd1eab3fe5, ASSIGN}] 2023-07-12 20:18:42,272 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e4a29c7853f4c649e9db75dd1eab3fe5, ASSIGN 2023-07-12 20:18:42,273 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e4a29c7853f4c649e9db75dd1eab3fe5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46531,1689193120819; forceNewPlan=false, retain=false 2023-07-12 20:18:42,277 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:42,279 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => bac5381289dc4350cf863d49cca42390, NAME => 'hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing bac5381289dc4350cf863d49cca42390, disabling compactions & flushes 2023-07-12 20:18:42,294 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. after waiting 0 ms 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,294 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for bac5381289dc4350cf863d49cca42390: 2023-07-12 20:18:42,297 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:42,298 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193122298"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193122298"}]},"ts":"1689193122298"} 2023-07-12 20:18:42,299 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
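As an illustrative aside: the hbase:rsgroup descriptor logged above carries the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy alongside its single 'm' family. The sketch below shows how those attributes map onto the TableDescriptorBuilder API; it is an illustration of the descriptor, not the code path the master uses to bootstrap its own system table.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.util.Bytes;

  // Sketch: coprocessor and split policy as they appear in the logged descriptor.
  static TableDescriptor rsgroupLikeDescriptor(TableName name) throws java.io.IOException {
    return TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)  // VERSIONS => '1'
            .build())
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }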
2023-07-12 20:18:42,301 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:42,301 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193122301"}]},"ts":"1689193122301"} 2023-07-12 20:18:42,302 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 20:18:42,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:42,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:42,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:42,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:42,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:42,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bac5381289dc4350cf863d49cca42390, ASSIGN}] 2023-07-12 20:18:42,311 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bac5381289dc4350cf863d49cca42390, ASSIGN 2023-07-12 20:18:42,312 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=bac5381289dc4350cf863d49cca42390, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46531,1689193120819; forceNewPlan=false, retain=false 2023-07-12 20:18:42,312 INFO [jenkins-hbase4:40547] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 20:18:42,314 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e4a29c7853f4c649e9db75dd1eab3fe5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,315 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193122314"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193122314"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193122314"}]},"ts":"1689193122314"} 2023-07-12 20:18:42,315 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bac5381289dc4350cf863d49cca42390, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,315 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193122315"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193122315"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193122315"}]},"ts":"1689193122315"} 2023-07-12 20:18:42,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure e4a29c7853f4c649e9db75dd1eab3fe5, server=jenkins-hbase4.apache.org,46531,1689193120819}] 2023-07-12 20:18:42,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure bac5381289dc4350cf863d49cca42390, server=jenkins-hbase4.apache.org,46531,1689193120819}] 2023-07-12 20:18:42,473 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 
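As an illustrative aside: the RegionStateStore entries above record each region moving to OPENING with a regionLocation written into hbase:meta. Once such an assignment completes, a client can look up where a region landed through the standard RegionLocator API, as in the sketch below; the helper name is hypothetical and 'conn' is assumed to be an open Connection to this cluster.

  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.HRegionLocation;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.RegionLocator;

  // Sketch: resolve the region holding the first row of a table, forcing a
  // fresh hbase:meta lookup rather than a cached location.
  static HRegionLocation whereIs(Connection conn, TableName table) throws java.io.IOException {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
    }
  }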
2023-07-12 20:18:42,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4a29c7853f4c649e9db75dd1eab3fe5, NAME => 'hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:42,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:42,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,475 INFO [StoreOpener-e4a29c7853f4c649e9db75dd1eab3fe5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,476 DEBUG [StoreOpener-e4a29c7853f4c649e9db75dd1eab3fe5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/info 2023-07-12 20:18:42,476 DEBUG [StoreOpener-e4a29c7853f4c649e9db75dd1eab3fe5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/info 2023-07-12 20:18:42,477 INFO [StoreOpener-e4a29c7853f4c649e9db75dd1eab3fe5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4a29c7853f4c649e9db75dd1eab3fe5 columnFamilyName info 2023-07-12 20:18:42,477 INFO [StoreOpener-e4a29c7853f4c649e9db75dd1eab3fe5-1] regionserver.HStore(310): Store=e4a29c7853f4c649e9db75dd1eab3fe5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:42,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:42,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:42,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4a29c7853f4c649e9db75dd1eab3fe5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9539550400, jitterRate=-0.11156013607978821}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:42,483 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4a29c7853f4c649e9db75dd1eab3fe5: 2023-07-12 20:18:42,484 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5., pid=8, masterSystemTime=1689193122469 2023-07-12 20:18:42,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:42,487 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:42,487 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 
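As an illustrative aside: the open sequence above (recovered-edits check, seqid file write, post-open deploy tasks) is what a test waits on before touching a newly created table. A minimal polling sketch using the Admin API is shown below; it is one simple way to wait, not the mechanism this particular test necessarily uses.

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;

  // Sketch: block until every region of the table is open and serving.
  static void waitUntilOnline(Admin admin, TableName table) throws Exception {
    while (!admin.isTableAvailable(table)) {
      Thread.sleep(100);  // crude poll; a real test would also bound the wait
    }
  }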
2023-07-12 20:18:42,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bac5381289dc4350cf863d49cca42390, NAME => 'hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:42,487 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=e4a29c7853f4c649e9db75dd1eab3fe5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 20:18:42,487 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689193122487"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193122487"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193122487"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193122487"}]},"ts":"1689193122487"} 2023-07-12 20:18:42,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. service=MultiRowMutationService 2023-07-12 20:18:42,487 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 20:18:42,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:42,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,489 INFO [StoreOpener-bac5381289dc4350cf863d49cca42390-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 20:18:42,490 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure e4a29c7853f4c649e9db75dd1eab3fe5, server=jenkins-hbase4.apache.org,46531,1689193120819 in 173 msec 2023-07-12 20:18:42,491 DEBUG [StoreOpener-bac5381289dc4350cf863d49cca42390-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/m 2023-07-12 20:18:42,491 DEBUG [StoreOpener-bac5381289dc4350cf863d49cca42390-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/m 2023-07-12 20:18:42,491 INFO [StoreOpener-bac5381289dc4350cf863d49cca42390-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bac5381289dc4350cf863d49cca42390 columnFamilyName m 2023-07-12 20:18:42,492 INFO [StoreOpener-bac5381289dc4350cf863d49cca42390-1] regionserver.HStore(310): Store=bac5381289dc4350cf863d49cca42390/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:42,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 20:18:42,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e4a29c7853f4c649e9db75dd1eab3fe5, ASSIGN in 219 msec 2023-07-12 20:18:42,496 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:42,496 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193122496"}]},"ts":"1689193122496"} 2023-07-12 20:18:42,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:42,497 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 20:18:42,498 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:42,499 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bac5381289dc4350cf863d49cca42390; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@68b22f0b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:42,499 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bac5381289dc4350cf863d49cca42390: 2023-07-12 20:18:42,500 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390., pid=9, masterSystemTime=1689193122469 2023-07-12 20:18:42,500 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:42,502 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,502 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:42,502 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 333 msec 2023-07-12 20:18:42,502 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bac5381289dc4350cf863d49cca42390, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,503 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689193122502"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193122502"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193122502"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193122502"}]},"ts":"1689193122502"} 2023-07-12 20:18:42,505 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 20:18:42,505 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure bac5381289dc4350cf863d49cca42390, server=jenkins-hbase4.apache.org,46531,1689193120819 in 186 msec 2023-07-12 20:18:42,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 20:18:42,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=bac5381289dc4350cf863d49cca42390, ASSIGN in 196 msec 2023-07-12 20:18:42,508 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:42,508 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193122508"}]},"ts":"1689193122508"} 2023-07-12 20:18:42,509 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in 
hbase:meta 2023-07-12 20:18:42,511 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:42,512 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 274 msec 2023-07-12 20:18:42,546 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 20:18:42,546 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 20:18:42,551 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:42,551 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:42,553 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:42,554 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 20:18:42,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 20:18:42,573 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:42,574 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:42,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 20:18:42,585 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:42,588 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-12 20:18:42,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 20:18:42,596 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): 
master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:42,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-12 20:18:42,604 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 20:18:42,606 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 20:18:42,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.176sec 2023-07-12 20:18:42,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 20:18:42,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 20:18:42,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 20:18:42,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1689193120644-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 20:18:42,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1689193120644-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
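As an illustrative aside: the CreateNamespaceProcedure entries above create the built-in 'default' and 'hbase' namespaces as part of master initialization. User namespaces go through the same procedure when driven from the Admin API, as in the sketch below; the helper name is hypothetical.

  import org.apache.hadoop.hbase.NamespaceDescriptor;
  import org.apache.hadoop.hbase.client.Admin;

  // Sketch: create a namespace and list what exists, which on a fresh cluster
  // would include the 'default' and 'hbase' namespaces created above.
  static void createUserNamespace(Admin admin, String name) throws java.io.IOException {
    admin.createNamespace(NamespaceDescriptor.create(name).build());
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      System.out.println("namespace: " + ns.getName());
    }
  }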
2023-07-12 20:18:42,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 20:18:42,621 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(139): Connect 0x527f70d6 to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:42,627 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ed0e24a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:42,629 DEBUG [hconnection-0x58405c1f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:42,631 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42486, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:42,633 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:42,633 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:42,640 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 20:18:42,642 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51970, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 20:18:42,645 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 20:18:42,645 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:42,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-12 20:18:42,646 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(139): Connect 0x54a1db90 to 127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:42,651 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e5bc5bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:42,651 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:42,659 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:42,660 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015b30065e000a connected 2023-07-12 
20:18:42,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:42,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:42,666 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 20:18:42,678 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 20:18:42,679 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41921 2023-07-12 20:18:42,679 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 20:18:42,681 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 20:18:42,681 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:42,682 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 20:18:42,683 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41921 connecting to ZooKeeper ensemble=127.0.0.1:58245 2023-07-12 20:18:42,687 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:419210x0, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 20:18:42,690 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(162): regionserver:419210x0, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 20:18:42,691 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:41921-0x1015b30065e000b connected 2023-07-12 20:18:42,691 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 20:18:42,692 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 20:18:42,694 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41921 2023-07-12 20:18:42,695 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41921 2023-07-12 20:18:42,698 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41921 2023-07-12 20:18:42,699 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41921 2023-07-12 20:18:42,699 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41921 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 20:18:42,701 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 20:18:42,702 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 20:18:42,702 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 42583 2023-07-12 20:18:42,702 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 20:18:42,707 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:42,707 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c7f574d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,AVAILABLE} 2023-07-12 20:18:42,707 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:42,708 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10edd81a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-12 20:18:42,821 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 20:18:42,822 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 20:18:42,822 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 20:18:42,823 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 20:18:42,824 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 20:18:42,824 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2d578626{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/java.io.tmpdir/jetty-0_0_0_0-42583-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8722766307252486127/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:42,826 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@748ea336{HTTP/1.1, (http/1.1)}{0.0.0.0:42583} 2023-07-12 20:18:42,827 INFO [Listener at localhost/33473] server.Server(415): Started @45566ms 2023-07-12 20:18:42,829 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(951): ClusterId : 30d670ec-3811-47ae-a0f9-348192307d80 2023-07-12 20:18:42,829 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 20:18:42,831 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 20:18:42,831 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 20:18:42,833 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 20:18:42,834 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ReadOnlyZKClient(139): Connect 0x4be2543b to 
127.0.0.1:58245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 20:18:42,841 DEBUG [RS:3;jenkins-hbase4:41921] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4525b337, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 20:18:42,841 DEBUG [RS:3;jenkins-hbase4:41921] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@411a3f4b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:42,849 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41921 2023-07-12 20:18:42,849 INFO [RS:3;jenkins-hbase4:41921] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 20:18:42,849 INFO [RS:3;jenkins-hbase4:41921] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 20:18:42,849 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 20:18:42,850 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40547,1689193120644 with isa=jenkins-hbase4.apache.org/172.31.14.131:41921, startcode=1689193122677 2023-07-12 20:18:42,850 DEBUG [RS:3;jenkins-hbase4:41921] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 20:18:42,852 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36715, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 20:18:42,853 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,853 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 20:18:42,853 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f 2023-07-12 20:18:42,853 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34547 2023-07-12 20:18:42,853 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41529 2023-07-12 20:18:42,857 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:42,857 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:42,857 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:42,857 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:42,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:42,858 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,858 WARN [RS:3;jenkins-hbase4:41921] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 20:18:42,858 INFO [RS:3;jenkins-hbase4:41921] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 20:18:42,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 20:18:42,858 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,858 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41921,1689193122677] 2023-07-12 20:18:42,858 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:42,858 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:42,859 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:42,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:42,862 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 20:18:42,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:42,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:42,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,864 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,865 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:42,865 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:42,866 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,866 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ZKUtil(162): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:42,867 DEBUG [RS:3;jenkins-hbase4:41921] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 20:18:42,867 INFO [RS:3;jenkins-hbase4:41921] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 20:18:42,868 INFO [RS:3;jenkins-hbase4:41921] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 20:18:42,868 INFO [RS:3;jenkins-hbase4:41921] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 20:18:42,868 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,868 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 20:18:42,870 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,870 DEBUG [RS:3;jenkins-hbase4:41921] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-12 20:18:42,874 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,874 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,875 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 20:18:42,885 INFO [RS:3;jenkins-hbase4:41921] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 20:18:42,885 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41921,1689193122677-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 20:18:42,895 INFO [RS:3;jenkins-hbase4:41921] regionserver.Replication(203): jenkins-hbase4.apache.org,41921,1689193122677 started 2023-07-12 20:18:42,895 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41921,1689193122677, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41921, sessionid=0x1015b30065e000b 2023-07-12 20:18:42,895 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 20:18:42,895 DEBUG [RS:3;jenkins-hbase4:41921] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,895 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41921,1689193122677' 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 20:18:42,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41921,1689193122677' 2023-07-12 20:18:42,896 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 20:18:42,897 DEBUG [RS:3;jenkins-hbase4:41921] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 20:18:42,897 DEBUG [RS:3;jenkins-hbase4:41921] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 20:18:42,897 INFO [RS:3;jenkins-hbase4:41921] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 20:18:42,897 INFO [RS:3;jenkins-hbase4:41921] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 20:18:42,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:42,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:42,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:42,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:42,909 DEBUG [hconnection-0x6efbb307-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 20:18:42,910 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42500, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 20:18:42,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:42,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:42,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:42,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:42,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51970 deadline: 1689194322919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
2023-07-12 20:18:42,919 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:42,921 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:42,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:42,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:42,922 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:42,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:42,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:42,979 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=554 (was 504) Potentially hanging thread: hconnection-0x49393020-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x6efbb307-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 44023 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-5ecbfcb6-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x49393020-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp343539814-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1798300118-2290-acceptor-0@62058809-ServerConnector@71a4cb2f{HTTP/1.1, (http/1.1)}{0.0.0.0:46293} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567954837-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x527f70d6-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x49393020-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:34547 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52715@0x6aa639d0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1574043798_17 at /127.0.0.1:47390 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@66b1b46a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-714347317_17 at /127.0.0.1:47408 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41921-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 34547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/33473.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43827Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp567954837-2264 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:43827 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 46429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52715@0x6aa639d0-SendThread(127.0.0.1:52715) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp394559044-2235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34685,1689193115094 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:43016 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1840207152@qtp-1100839517-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41921Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2111656738-2198 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:38407 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp343539814-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:33535 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x53d004a6-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 46429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1798300118-2295 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:38407-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:33535 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 181135888@qtp-1564104209-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45427 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp343539814-2301 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data2/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3d38750a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@58da0e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x2695aa84 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:46531Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp343539814-2302 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1178318477_17 at /127.0.0.1:53364 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:53380 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp343539814-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp394559044-2234 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x22d656d3-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/38141-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 946003280@qtp-1564104209-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:47434 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1574043798_17 at /127.0.0.1:42970 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:46531 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:40547 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp394559044-2231 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49393020-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1770960085) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server idle connection scanner for port 33473 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp394559044-2229 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:47438 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:43018 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp971341067-2570 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data6/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33473 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp971341067-2565 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:53376 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6240751f java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1574043798_17 at /127.0.0.1:47362 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x527f70d6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567954837-2262 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2111656738-2200 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:33535 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x4be2543b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7cbca702-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: qtp971341067-2567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f-prefix:jenkins-hbase4.apache.org,46531,1689193120819.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:53410 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 46429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1476039184@qtp-1100839517-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36741 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:34547 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 
Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x74c48c7f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp394559044-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@76626a87 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:38407Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp343539814-2305-acceptor-0@561f31ec-ServerConnector@1abdfc99{HTTP/1.1, (http/1.1)}{0.0.0.0:44627} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1798300118-2294 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp971341067-2568 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x49393020-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp971341067-2572 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:58245): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS:0;jenkins-hbase4:46531-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp971341067-2571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:33535 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:33535 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp343539814-2303 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x54a1db90-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-714347317_17 at /127.0.0.1:42996 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-f50000-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data3/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 44023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x22d656d3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:33535 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@29cae0f5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2111656738-2203 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1574043798_17 at /127.0.0.1:53320 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x22d656d3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp567954837-2259 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp394559044-2230-acceptor-0@53bec78d-ServerConnector@eea127f{HTTP/1.1, (http/1.1)}{0.0.0.0:45225} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52715@0x6aa639d0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp971341067-2566-acceptor-0@7a866d28-ServerConnector@748ea336{HTTP/1.1, (http/1.1)}{0.0.0.0:42583} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x53d004a6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x74c48c7f-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2419bcf3[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567954837-2265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567954837-2260-acceptor-0@10e5cb6e-ServerConnector@2492816c{HTTP/1.1, (http/1.1)}{0.0.0.0:43257} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1798300118-2292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2111656738-2199-acceptor-0@52425b56-ServerConnector@79468c31{HTTP/1.1, (http/1.1)}{0.0.0.0:41529} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1656447072@qtp-1801763390-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp343539814-2304 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x4be2543b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1178318477_17 at /127.0.0.1:47418 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:34547 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2111656738-2205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1798300118-2291 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@431080c9 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x54a1db90-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1798300118-2289 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f-prefix:jenkins-hbase4.apache.org,43827,1689193121229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data1/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 34547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp971341067-2569 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x4be2543b-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp567954837-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x58405c1f-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data5/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33473-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@3d85fcf4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33535 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:43827-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f-prefix:jenkins-hbase4.apache.org,46531,1689193120819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x49393020-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x527f70d6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41921 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-1d5b597-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1798300118-2293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5ef7553-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@a284561[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:33535 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:34547 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1178318477_17 at /127.0.0.1:43012 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@23efee4b sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase4:41921 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46531 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 34547 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x49393020-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@5489af52 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7a6a68cd[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 44023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-714347317_17 at /127.0.0.1:53350 [Receiving block BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1405075649@qtp-1800111825-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x2695aa84-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40547 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp2111656738-2202 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6efbb307-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1220816525-172.31.14.131-1689193119866:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x49393020-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data4/current/BP-1220816525-172.31.14.131-1689193119866 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1607301231_17 at /127.0.0.1:42952 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41921 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40547,1689193120644 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f-prefix:jenkins-hbase4.apache.org,38407,1689193120989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193121667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp1798300118-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34547 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193121668 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@59f1a628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 116826433@qtp-1801763390-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42309 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@508c0d88 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp394559044-2233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x53d004a6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@3031b41f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38141-SendThread(127.0.0.1:52715) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Server idle connection scanner for port 46429 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x54a1db90 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/230090295.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData-prefix:jenkins-hbase4.apache.org,40547,1689193120644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2111656738-2201 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 526077057@qtp-1800111825-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35607 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x74c48c7f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 46429 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:33535 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2111656738-2204 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp567954837-2263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58245@0x2695aa84-SendThread(127.0.0.1:58245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/33473-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp394559044-2232 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:58245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) - Thread LEAK? -, OpenFileDescriptor=829 (was 777) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 558), ProcessCount=171 (was 172), AvailableMemoryMB=6399 (was 6250) - AvailableMemoryMB LEAK? 
- 2023-07-12 20:18:42,983 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-12 20:18:43,000 INFO [RS:3;jenkins-hbase4:41921] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41921%2C1689193122677, suffix=, logDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,41921,1689193122677, archiveDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs, maxLogs=32 2023-07-12 20:18:43,001 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=554, OpenFileDescriptor=829, MaxFileDescriptor=60000, SystemLoadAverage=543, ProcessCount=171, AvailableMemoryMB=6398 2023-07-12 20:18:43,001 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-12 20:18:43,001 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 20:18:43,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:43,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:43,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:43,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:43,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:43,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:43,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:43,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:43,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:43,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:43,021 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK] 2023-07-12 20:18:43,021 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK] 2023-07-12 20:18:43,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:43,021 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK] 2023-07-12 20:18:43,024 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:43,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:43,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:43,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:43,028 INFO [RS:3;jenkins-hbase4:41921] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/WALs/jenkins-hbase4.apache.org,41921,1689193122677/jenkins-hbase4.apache.org%2C41921%2C1689193122677.1689193123000 2023-07-12 20:18:43,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:43,030 DEBUG [RS:3;jenkins-hbase4:41921] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45133,DS-919165da-60cb-4c5a-8bd8-d6703428735f,DISK], DatanodeInfoWithStorage[127.0.0.1:33003,DS-46809008-3e07-4a08-8ec7-7e450c28f5a1,DISK], DatanodeInfoWithStorage[127.0.0.1:35755,DS-4b03354c-de95-4316-9f8f-31dabd8277ba,DISK]] 2023-07-12 20:18:43,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:43,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:43,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:43,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:43,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:43,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51970 deadline: 1689194323035, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:43,036 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:43,037 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:43,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:43,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:43,038 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:43,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:43,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:43,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:43,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 20:18:43,042 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:43,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 20:18:43,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 20:18:43,044 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:43,044 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:43,045 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:43,047 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 20:18:43,048 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 
20:18:43,048 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 empty. 2023-07-12 20:18:43,049 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,049 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 20:18:43,063 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 20:18:43,064 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 08fcf9e9ff78031b85aac84e3eba0873, NAME => 't1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 08fcf9e9ff78031b85aac84e3eba0873, disabling compactions & flushes 2023-07-12 20:18:43,073 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. after waiting 0 ms 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,073 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,073 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 08fcf9e9ff78031b85aac84e3eba0873: 2023-07-12 20:18:43,076 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 20:18:43,076 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193123076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193123076"}]},"ts":"1689193123076"} 2023-07-12 20:18:43,078 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
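
The CreateTableProcedure records above are driven by an ordinary client-side Admin.createTable call. Below is a minimal sketch of that call, assuming the HBase 2.x client API and a reachable (mini)cluster, with the same 't1'/'cf1' layout the master logs; the class and variable names are illustrative only, not the test's own code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateT1Example {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes cluster settings are on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("t1");
      // Single family 'cf1' with default attributes, matching the descriptor printed by HMaster above.
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(t1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build();
      if (!admin.tableExists(t1)) { // a second createTable for the same name fails with TableExistsException
        admin.createTable(desc);    // blocks until the CreateTableProcedure completes
      }
    }
  }
}
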
2023-07-12 20:18:43,078 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 20:18:43,079 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193123078"}]},"ts":"1689193123078"} 2023-07-12 20:18:43,080 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 20:18:43,083 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-12 20:18:43,083 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 20:18:43,084 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 20:18:43,084 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 20:18:43,084 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 20:18:43,084 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 20:18:43,084 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, ASSIGN}] 2023-07-12 20:18:43,085 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, ASSIGN 2023-07-12 20:18:43,085 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46531,1689193120819; forceNewPlan=false, retain=false 2023-07-12 20:18:43,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 20:18:43,235 INFO [jenkins-hbase4:40547] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 20:18:43,237 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=08fcf9e9ff78031b85aac84e3eba0873, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:43,237 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193123237"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193123237"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193123237"}]},"ts":"1689193123237"} 2023-07-12 20:18:43,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 08fcf9e9ff78031b85aac84e3eba0873, server=jenkins-hbase4.apache.org,46531,1689193120819}] 2023-07-12 20:18:43,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:43,321 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 20:18:43,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:43,321 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 20:18:43,321 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:43,321 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 20:18:43,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 20:18:43,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 
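
Once the OpenRegionProcedure is dispatched to the region server, the test only moves on after hbase:meta reports a hosting server for the table's region. The test itself uses HBaseTestingUtility's waiter for this, so the following is only an approximate sketch of the same check using the standard RegionLocator client API; names are illustrative.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public final class AssignmentCheck {
  private AssignmentCheck() {}

  // Returns true once every region of the table has a hosting server recorded in hbase:meta.
  static boolean allRegionsAssigned(Connection conn, TableName table) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        if (loc == null || loc.getServerName() == null) {
          return false; // at least one region is not yet assigned
        }
      }
      return true;
    }
  }
}

A caller would typically poll this in a loop with a timeout, which is roughly what the "Waiting until all regions of table t1 get assigned" messages further down correspond to.
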
2023-07-12 20:18:43,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08fcf9e9ff78031b85aac84e3eba0873, NAME => 't1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.', STARTKEY => '', ENDKEY => ''} 2023-07-12 20:18:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 20:18:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,396 INFO [StoreOpener-08fcf9e9ff78031b85aac84e3eba0873-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,397 DEBUG [StoreOpener-08fcf9e9ff78031b85aac84e3eba0873-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/cf1 2023-07-12 20:18:43,397 DEBUG [StoreOpener-08fcf9e9ff78031b85aac84e3eba0873-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/cf1 2023-07-12 20:18:43,398 INFO [StoreOpener-08fcf9e9ff78031b85aac84e3eba0873-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08fcf9e9ff78031b85aac84e3eba0873 columnFamilyName cf1 2023-07-12 20:18:43,398 INFO [StoreOpener-08fcf9e9ff78031b85aac84e3eba0873-1] regionserver.HStore(310): Store=08fcf9e9ff78031b85aac84e3eba0873/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 20:18:43,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 20:18:43,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08fcf9e9ff78031b85aac84e3eba0873; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10748374080, jitterRate=0.0010203421115875244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 20:18:43,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08fcf9e9ff78031b85aac84e3eba0873: 2023-07-12 20:18:43,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873., pid=14, masterSystemTime=1689193123390 2023-07-12 20:18:43,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,407 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=08fcf9e9ff78031b85aac84e3eba0873, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:43,407 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193123407"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689193123407"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689193123407"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689193123407"}]},"ts":"1689193123407"} 2023-07-12 20:18:43,409 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 20:18:43,409 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 08fcf9e9ff78031b85aac84e3eba0873, server=jenkins-hbase4.apache.org,46531,1689193120819 in 169 msec 2023-07-12 20:18:43,411 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 20:18:43,411 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, ASSIGN in 325 msec 2023-07-12 20:18:43,411 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 20:18:43,411 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193123411"}]},"ts":"1689193123411"} 2023-07-12 20:18:43,415 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 20:18:43,418 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 20:18:43,419 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 378 msec 2023-07-12 20:18:43,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 20:18:43,647 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 20:18:43,647 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 20:18:43,647 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:43,649 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-12 20:18:43,649 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:43,649 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-12 20:18:43,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 20:18:43,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 20:18:43,653 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 20:18:43,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 20:18:43,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:51970 deadline: 1689193183650, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 20:18:43,656 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:43,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-12 20:18:43,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:43,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:43,758 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of t1 2023-07-12 20:18:43,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-12 20:18:43,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 20:18:43,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:43,762 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193123762"}]},"ts":"1689193123762"} 2023-07-12 20:18:43,763 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 20:18:43,765 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 20:18:43,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, UNASSIGN}] 2023-07-12 20:18:43,766 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, UNASSIGN 2023-07-12 20:18:43,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=08fcf9e9ff78031b85aac84e3eba0873, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:43,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193123767"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689193123767"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689193123767"}]},"ts":"1689193123767"} 2023-07-12 20:18:43,768 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 08fcf9e9ff78031b85aac84e3eba0873, server=jenkins-hbase4.apache.org,46531,1689193120819}] 2023-07-12 20:18:43,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:43,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08fcf9e9ff78031b85aac84e3eba0873, disabling compactions & flushes 2023-07-12 20:18:43,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. after waiting 0 ms 2023-07-12 20:18:43,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 2023-07-12 20:18:43,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 20:18:43,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873. 
2023-07-12 20:18:43,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08fcf9e9ff78031b85aac84e3eba0873: 2023-07-12 20:18:43,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:43,926 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=08fcf9e9ff78031b85aac84e3eba0873, regionState=CLOSED 2023-07-12 20:18:43,926 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689193123926"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689193123926"}]},"ts":"1689193123926"} 2023-07-12 20:18:43,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 20:18:43,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 08fcf9e9ff78031b85aac84e3eba0873, server=jenkins-hbase4.apache.org,46531,1689193120819 in 160 msec 2023-07-12 20:18:43,930 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 20:18:43,930 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=08fcf9e9ff78031b85aac84e3eba0873, UNASSIGN in 163 msec 2023-07-12 20:18:43,931 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689193123931"}]},"ts":"1689193123931"} 2023-07-12 20:18:43,932 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 20:18:43,935 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 20:18:43,936 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-12 20:18:44,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 20:18:44,064 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 20:18:44,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-12 20:18:44,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 20:18:44,068 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 20:18:44,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 20:18:44,069 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 20:18:44,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,073 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:44,075 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/cf1, FileablePath, hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/recovered.edits] 2023-07-12 20:18:44,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 20:18:44,081 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/recovered.edits/4.seqid to hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/archive/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873/recovered.edits/4.seqid 2023-07-12 20:18:44,081 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/.tmp/data/default/t1/08fcf9e9ff78031b85aac84e3eba0873 2023-07-12 20:18:44,081 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 20:18:44,084 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 20:18:44,086 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 20:18:44,087 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 20:18:44,088 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 20:18:44,088 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-12 20:18:44,088 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689193124088"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:44,090 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 20:18:44,090 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 08fcf9e9ff78031b85aac84e3eba0873, NAME => 't1,,1689193123040.08fcf9e9ff78031b85aac84e3eba0873.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 20:18:44,090 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 
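
The DisableTableProcedure and DeleteTableProcedure records above correspond to two more Admin calls from the client. A small sketch, again assuming the HBase 2.x Admin API; the helper name is made up for illustration.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableExample {
  private DropTableExample() {}

  // Disable and delete a table if it exists, mirroring the procedure sequence in the log above.
  static void dropIfExists(Admin admin, TableName table) throws Exception {
    if (!admin.tableExists(table)) {
      return;
    }
    if (admin.isTableEnabled(table)) {
      admin.disableTable(table); // drives DisableTableProcedure: region closed, state=DISABLED in hbase:meta
    }
    admin.deleteTable(table);    // drives DeleteTableProcedure: region dir archived, rows removed from meta
  }
}
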
2023-07-12 20:18:44,090 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689193124090"}]},"ts":"9223372036854775807"} 2023-07-12 20:18:44,092 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 20:18:44,094 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 20:18:44,095 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 29 msec 2023-07-12 20:18:44,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 20:18:44,182 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 20:18:44,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
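
The teardown that follows tries to move the master's address (port 40547) into the 'master' rsgroup and is rejected with ConstraintException because that address is not a live region server; TestRSGroupsBase only logs this as "Got this on setup, FYI". The sketch below shows one way a caller could tolerate that expected rejection, assuming the hbase-rsgroup client classes named in the stack traces (RSGroupAdminClient, Address); the helper itself is hypothetical.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveServerExample {
  private MoveServerExample() {}

  // Try to move one server into a target rsgroup, tolerating the "offline or does not exist" rejection.
  static void moveServerBestEffort(Connection conn, String host, int port, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Address server = Address.fromParts(host, port); // e.g. the master's address, which is not a region server
    try {
      rsGroupAdmin.moveServers(Collections.singleton(server), group);
    } catch (ConstraintException e) {
      // Expected when the address is not a live region server in any group, as in the teardown trace below.
    }
  }
}
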
2023-07-12 20:18:44,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,206 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:51970 deadline: 1689194324221, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,222 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:44,225 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,226 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,257 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 554) - Thread LEAK? -, OpenFileDescriptor=841 (was 829) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 543), ProcessCount=171 (was 171), AvailableMemoryMB=6378 (was 6398) 2023-07-12 20:18:44,257 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-12 20:18:44,276 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=543, ProcessCount=171, AvailableMemoryMB=6377 2023-07-12 20:18:44,277 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-12 20:18:44,277 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 20:18:44,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:44,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,291 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,294 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324302, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,302 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:44,304 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,305 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 20:18:44,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:44,308 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 20:18:44,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 20:18:44,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 20:18:44,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
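The ConstraintException above recurs on every setUp/tearDown cycle in this run: TestRSGroupsBase restores all tables and servers to the default group, removes and re-adds the "master" rsgroup, and then tries to move the master's address (jenkins-hbase4.apache.org:40547) into it; RSGroupAdminServer.moveServers rejects the call because the master is not an online region server, and the test only logs it as a benign "Got this on setup, FYI" warning before continuing. A minimal sketch of the client-side sequence that produces these entries follows, assuming the branch-2.4 RSGroupAdminClient API; the class and method names are taken from the stack trace, while the exact signatures and the helper itself are assumptions.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hypothetical helper mirroring the per-test restore sequence seen in the log above.
public class RestoreMasterGroupSketch {
  static void restoreMasterGroup(Connection conn, String masterHostPort) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.removeRSGroup("master");   // RemoveRSGroup -> "Writing ZK GroupInfo count: 3"
    rsGroupAdmin.addRSGroup("master");      // AddRSGroup    -> "Writing ZK GroupInfo count: 4"
    try {
      // MoveServers: rejected because the master address is not an online region server.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString(masterHostPort)), "master");
    } catch (IOException e) {
      // Surfaces as ConstraintException; TestRSGroupsBase logs "Got this on setup, FYI" and moves on.
    }
  }
}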
2023-07-12 20:18:44,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,328 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324338, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,339 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:44,341 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,342 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,367 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=841 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 543), ProcessCount=171 (was 171), AvailableMemoryMB=6377 (was 6377) 2023-07-12 20:18:44,367 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-12 20:18:44,389 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=543, ProcessCount=171, AvailableMemoryMB=6375 2023-07-12 20:18:44,389 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-12 20:18:44,389 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 20:18:44,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:44,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,405 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,408 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324416, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,417 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:44,419 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,420 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
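Each RemoveRSGroup/AddRSGroup round trip above is also persisted by RSGroupInfoManagerImpl, which rewrites one child znode per group under /hbase/rsgroup; that is what the "Updating znode: /hbase/rsgroup/default", "/hbase/rsgroup/master" and "Writing ZK GroupInfo count: 3"/"count: 4" DEBUG entries reflect as the master group is dropped and re-created. A small sketch for listing those children with a plain ZooKeeper client is below; the quorum address is an assumption (the minicluster picks an ephemeral client port) and only the /hbase/rsgroup path is taken from the log.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical inspection of the rsgroup znodes the DEBUG entries refer to.
public class ListRsGroupZnodesSketch {
  public static void main(String[] args) throws Exception {
    // "localhost:2181" is assumed; point this at the test cluster's actual ZK quorum.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> { });
    try {
      List<String> groups = zk.getChildren("/hbase/rsgroup", false);
      // Expected after the AddRSGroup above: at least "default" and "master".
      groups.forEach(g -> System.out.println("/hbase/rsgroup/" + g));
    } finally {
      zk.close();
    }
  }
}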
2023-07-12 20:18:44,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,441 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324450, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,451 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:44,453 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,454 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,476 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=841 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 543), ProcessCount=171 (was 171), AvailableMemoryMB=6371 (was 6375) 2023-07-12 20:18:44,477 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-12 20:18:44,500 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=543, ProcessCount=171, AvailableMemoryMB=6371 2023-07-12 20:18:44,500 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-12 20:18:44,501 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 20:18:44,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 20:18:44,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,516 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,519 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324532, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,533 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 20:18:44,535 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,536 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,537 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-12 20:18:44,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-12 20:18:44,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 20:18:44,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 20:18:44,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 20:18:44,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,556 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 20:18:44,561 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:44,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-12 20:18:44,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 20:18:44,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 20:18:44,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:51970 deadline: 1689194324658, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 20:18:44,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 20:18:44,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 20:18:44,691 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 20:18:44,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 23 msec 2023-07-12 20:18:44,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 20:18:44,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-12 20:18:44,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 20:18:44,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 20:18:44,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 20:18:44,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-12 20:18:44,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,805 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,808 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 20:18:44,810 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,812 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 20:18:44,812 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 20:18:44,812 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,815 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 20:18:44,816 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-12 20:18:44,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 20:18:44,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-12 20:18:44,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 20:18:44,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 20:18:44,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:51970 deadline: 1689193184922, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-12 20:18:44,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:44,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-12 20:18:44,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 20:18:44,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-12 20:18:44,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 20:18:44,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-12 20:18:44,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-12 20:18:44,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-12 20:18:44,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-12 20:18:44,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 20:18:44,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 20:18:44,942 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 20:18:44,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-12 20:18:44,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 20:18:44,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 20:18:44,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 20:18:44,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 20:18:44,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40547] to rsgroup master 2023-07-12 20:18:44,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 20:18:44,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:51970 deadline: 1689194324960, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 2023-07-12 20:18:44,961 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40547 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 20:18:44,963 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 20:18:44,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-12 20:18:44,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 20:18:44,964 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38407, jenkins-hbase4.apache.org:41921, jenkins-hbase4.apache.org:43827, jenkins-hbase4.apache.org:46531], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 20:18:44,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-12 20:18:44,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 20:18:44,994 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=571 (was 572), OpenFileDescriptor=836 (was 841), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=543 (was 543), ProcessCount=171 (was 171), AvailableMemoryMB=6334 (was 6371) 2023-07-12 20:18:44,994 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-12 20:18:44,994 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 20:18:44,994 INFO [Listener at localhost/33473] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 20:18:44,995 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x527f70d6 to 127.0.0.1:58245 2023-07-12 20:18:44,995 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:44,995 DEBUG [Listener at localhost/33473] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 
20:18:44,995 DEBUG [Listener at localhost/33473] util.JVMClusterUtil(257): Found active master hash=1078831141, stopped=false 2023-07-12 20:18:44,995 DEBUG [Listener at localhost/33473] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 20:18:44,995 DEBUG [Listener at localhost/33473] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 20:18:44,995 INFO [Listener at localhost/33473] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:44,997 INFO [Listener at localhost/33473] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 20:18:44,997 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:44,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:44,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:44,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:44,998 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2695aa84 to 127.0.0.1:58245 2023-07-12 20:18:44,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 20:18:44,998 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-07-12 20:18:44,998 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46531,1689193120819' ***** 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38407,1689193120989' ***** 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:44,999 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43827,1689193121229' ***** 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:44,999 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:44,999 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:44,999 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41921,1689193122677' ***** 2023-07-12 20:18:45,003 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 20:18:45,003 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:45,008 INFO [RS:0;jenkins-hbase4:46531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@61547d7e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:45,008 INFO [RS:3;jenkins-hbase4:41921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2d578626{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:45,008 INFO [RS:0;jenkins-hbase4:46531] server.AbstractConnector(383): Stopped ServerConnector@eea127f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,008 INFO [RS:1;jenkins-hbase4:38407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4446b6b9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:45,009 INFO [RS:3;jenkins-hbase4:41921] server.AbstractConnector(383): Stopped ServerConnector@748ea336{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,009 INFO [RS:0;jenkins-hbase4:46531] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:45,010 INFO [RS:2;jenkins-hbase4:43827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@688fd12e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-12 20:18:45,010 INFO [RS:1;jenkins-hbase4:38407] server.AbstractConnector(383): Stopped 
ServerConnector@2492816c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,011 INFO [RS:0;jenkins-hbase4:46531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f945bd4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:45,009 INFO [RS:3;jenkins-hbase4:41921] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:45,012 INFO [RS:2;jenkins-hbase4:43827] server.AbstractConnector(383): Stopped ServerConnector@71a4cb2f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,011 INFO [RS:1;jenkins-hbase4:38407] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:45,012 INFO [RS:2;jenkins-hbase4:43827] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:45,012 INFO [RS:0;jenkins-hbase4:46531] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e83c95c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:45,013 INFO [RS:3;jenkins-hbase4:41921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10edd81a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:45,014 INFO [RS:0;jenkins-hbase4:46531] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:45,014 INFO [RS:0;jenkins-hbase4:46531] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:45,014 INFO [RS:1;jenkins-hbase4:38407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3afce3b9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:45,014 INFO [RS:0;jenkins-hbase4:46531] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 20:18:45,015 INFO [RS:1;jenkins-hbase4:38407] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21668f73{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:45,015 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(3305): Received CLOSE for e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:45,015 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:45,015 INFO [RS:3;jenkins-hbase4:41921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c7f574d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:45,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4a29c7853f4c649e9db75dd1eab3fe5, disabling compactions & flushes 2023-07-12 20:18:45,015 INFO [RS:2;jenkins-hbase4:43827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@379a17d4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:45,015 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(3305): Received CLOSE for bac5381289dc4350cf863d49cca42390 2023-07-12 20:18:45,016 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:45,016 DEBUG [RS:0;jenkins-hbase4:46531] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x53d004a6 to 127.0.0.1:58245 2023-07-12 20:18:45,016 DEBUG [RS:0;jenkins-hbase4:46531] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,016 INFO [RS:0;jenkins-hbase4:46531] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:45,016 INFO [RS:0;jenkins-hbase4:46531] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:45,016 INFO [RS:0;jenkins-hbase4:46531] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:45,016 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 20:18:45,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:45,017 INFO [RS:3;jenkins-hbase4:41921] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:45,017 INFO [RS:2;jenkins-hbase4:43827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@8b7f666{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:45,018 INFO [RS:3;jenkins-hbase4:41921] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:45,018 INFO [RS:3;jenkins-hbase4:41921] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 20:18:45,018 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:45,018 DEBUG [RS:3;jenkins-hbase4:41921] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4be2543b to 127.0.0.1:58245 2023-07-12 20:18:45,018 DEBUG [RS:3;jenkins-hbase4:41921] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,018 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41921,1689193122677; all regions closed. 2023-07-12 20:18:45,016 INFO [RS:1;jenkins-hbase4:38407] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:45,018 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:45,018 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-12 20:18:45,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:45,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 20:18:45,018 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, e4a29c7853f4c649e9db75dd1eab3fe5=hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5., bac5381289dc4350cf863d49cca42390=hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390.} 2023-07-12 20:18:45,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. after waiting 0 ms 2023-07-12 20:18:45,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:45,018 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 20:18:45,019 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 20:18:45,019 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 20:18:45,019 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 20:18:45,019 INFO [RS:2;jenkins-hbase4:43827] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 20:18:45,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e4a29c7853f4c649e9db75dd1eab3fe5 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 20:18:45,019 INFO [RS:2;jenkins-hbase4:43827] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:45,019 INFO [RS:2;jenkins-hbase4:43827] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 20:18:45,019 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:45,019 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:45,019 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 20:18:45,019 INFO [RS:1;jenkins-hbase4:38407] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 20:18:45,019 INFO [RS:1;jenkins-hbase4:38407] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 20:18:45,019 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:45,019 DEBUG [RS:1;jenkins-hbase4:38407] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22d656d3 to 127.0.0.1:58245 2023-07-12 20:18:45,019 DEBUG [RS:1;jenkins-hbase4:38407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,020 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38407,1689193120989; all regions closed. 2023-07-12 20:18:45,019 DEBUG [RS:2;jenkins-hbase4:43827] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x74c48c7f to 127.0.0.1:58245 2023-07-12 20:18:45,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-12 20:18:45,020 DEBUG [RS:2;jenkins-hbase4:43827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,020 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43827,1689193121229; all regions closed. 2023-07-12 20:18:45,020 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1504): Waiting on 1588230740, bac5381289dc4350cf863d49cca42390, e4a29c7853f4c649e9db75dd1eab3fe5 2023-07-12 20:18:45,032 DEBUG [RS:2;jenkins-hbase4:43827] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs 2023-07-12 20:18:45,032 INFO [RS:2;jenkins-hbase4:43827] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43827%2C1689193121229:(num 1689193121885) 2023-07-12 20:18:45,032 DEBUG [RS:2;jenkins-hbase4:43827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,032 INFO [RS:2;jenkins-hbase4:43827] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,035 INFO [RS:2;jenkins-hbase4:43827] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:45,035 INFO [RS:2;jenkins-hbase4:43827] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:45,035 INFO [RS:2;jenkins-hbase4:43827] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:45,035 INFO [RS:2;jenkins-hbase4:43827] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:45,035 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 20:18:45,036 INFO [RS:2;jenkins-hbase4:43827] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43827 2023-07-12 20:18:45,036 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,036 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,043 DEBUG [RS:3;jenkins-hbase4:41921] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41921%2C1689193122677:(num 1689193123000) 2023-07-12 20:18:45,043 DEBUG [RS:3;jenkins-hbase4:41921] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:45,043 INFO [RS:3;jenkins-hbase4:41921] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:45,043 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:45,045 INFO [RS:3;jenkins-hbase4:41921] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41921 2023-07-12 20:18:45,045 DEBUG [RS:1;jenkins-hbase4:38407] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs 2023-07-12 20:18:45,045 INFO [RS:1;jenkins-hbase4:38407] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38407%2C1689193120989:(num 1689193121885) 2023-07-12 20:18:45,045 DEBUG [RS:1;jenkins-hbase4:38407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,045 INFO [RS:1;jenkins-hbase4:38407] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,046 INFO [RS:1;jenkins-hbase4:38407] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:45,046 INFO [RS:1;jenkins-hbase4:38407] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 20:18:45,046 INFO [RS:1;jenkins-hbase4:38407] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 20:18:45,046 INFO [RS:1;jenkins-hbase4:38407] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 20:18:45,046 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 20:18:45,047 INFO [RS:1;jenkins-hbase4:38407] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38407 2023-07-12 20:18:45,065 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/.tmp/info/9eee54c5257c418387ca1e89400867b2 2023-07-12 20:18:45,078 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,082 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/info/84eaeb49c4e240e193b1e59f317c4db7 2023-07-12 20:18:45,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9eee54c5257c418387ca1e89400867b2 2023-07-12 20:18:45,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/.tmp/info/9eee54c5257c418387ca1e89400867b2 as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/info/9eee54c5257c418387ca1e89400867b2 2023-07-12 20:18:45,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84eaeb49c4e240e193b1e59f317c4db7 2023-07-12 20:18:45,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9eee54c5257c418387ca1e89400867b2 2023-07-12 20:18:45,092 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/info/9eee54c5257c418387ca1e89400867b2, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 20:18:45,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for e4a29c7853f4c649e9db75dd1eab3fe5 in 74ms, sequenceid=9, compaction requested=false 2023-07-12 20:18:45,127 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:45,127 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:45,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/namespace/e4a29c7853f4c649e9db75dd1eab3fe5/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,127 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:45,127 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38407,1689193120989 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:45,127 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43827,1689193121229 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:45,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:45,128 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:45,128 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43827,1689193121229] 2023-07-12 20:18:45,129 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43827,1689193121229; numProcessing=1 2023-07-12 20:18:45,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4a29c7853f4c649e9db75dd1eab3fe5: 2023-07-12 20:18:45,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689193122167.e4a29c7853f4c649e9db75dd1eab3fe5. 2023-07-12 20:18:45,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bac5381289dc4350cf863d49cca42390, disabling compactions & flushes 2023-07-12 20:18:45,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:45,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:45,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. after waiting 0 ms 2023-07-12 20:18:45,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 
2023-07-12 20:18:45,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing bac5381289dc4350cf863d49cca42390 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-12 20:18:45,129 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41921,1689193122677 2023-07-12 20:18:45,132 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43827,1689193121229 already deleted, retry=false 2023-07-12 20:18:45,132 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43827,1689193121229 expired; onlineServers=3 2023-07-12 20:18:45,132 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38407,1689193120989] 2023-07-12 20:18:45,132 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38407,1689193120989; numProcessing=2 2023-07-12 20:18:45,133 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38407,1689193120989 already deleted, retry=false 2023-07-12 20:18:45,133 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38407,1689193120989 expired; onlineServers=2 2023-07-12 20:18:45,133 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41921,1689193122677] 2023-07-12 20:18:45,134 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41921,1689193122677; numProcessing=3 2023-07-12 20:18:45,135 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41921,1689193122677 already deleted, retry=false 2023-07-12 20:18:45,135 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41921,1689193122677 expired; onlineServers=1 2023-07-12 20:18:45,163 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/rep_barrier/58274facbdbf4e9aa26ce7ab50004aa8 2023-07-12 20:18:45,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 58274facbdbf4e9aa26ce7ab50004aa8 2023-07-12 20:18:45,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/.tmp/m/d90722b570264250990016543b1ae706 2023-07-12 20:18:45,193 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d90722b570264250990016543b1ae706 2023-07-12 20:18:45,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/.tmp/m/d90722b570264250990016543b1ae706 as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/m/d90722b570264250990016543b1ae706 2023-07-12 20:18:45,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d90722b570264250990016543b1ae706 2023-07-12 20:18:45,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/m/d90722b570264250990016543b1ae706, entries=12, sequenceid=29, filesize=5.4 K 2023-07-12 20:18:45,201 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for bac5381289dc4350cf863d49cca42390 in 72ms, sequenceid=29, compaction requested=false 2023-07-12 20:18:45,215 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/table/53c77307518d4b458b058ea28c2b6fc4 2023-07-12 20:18:45,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/rsgroup/bac5381289dc4350cf863d49cca42390/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 20:18:45,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:45,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 2023-07-12 20:18:45,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bac5381289dc4350cf863d49cca42390: 2023-07-12 20:18:45,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689193122237.bac5381289dc4350cf863d49cca42390. 
2023-07-12 20:18:45,220 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53c77307518d4b458b058ea28c2b6fc4 2023-07-12 20:18:45,221 DEBUG [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 20:18:45,221 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/info/84eaeb49c4e240e193b1e59f317c4db7 as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/info/84eaeb49c4e240e193b1e59f317c4db7 2023-07-12 20:18:45,226 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 84eaeb49c4e240e193b1e59f317c4db7 2023-07-12 20:18:45,226 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/info/84eaeb49c4e240e193b1e59f317c4db7, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 20:18:45,227 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/rep_barrier/58274facbdbf4e9aa26ce7ab50004aa8 as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/rep_barrier/58274facbdbf4e9aa26ce7ab50004aa8 2023-07-12 20:18:45,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 58274facbdbf4e9aa26ce7ab50004aa8 2023-07-12 20:18:45,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/rep_barrier/58274facbdbf4e9aa26ce7ab50004aa8, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 20:18:45,232 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/.tmp/table/53c77307518d4b458b058ea28c2b6fc4 as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/table/53c77307518d4b458b058ea28c2b6fc4 2023-07-12 20:18:45,236 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,236 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41921-0x1015b30065e000b, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,236 INFO [RS:3;jenkins-hbase4:41921] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41921,1689193122677; zookeeper connection closed. 
2023-07-12 20:18:45,237 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6e77102b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6e77102b 2023-07-12 20:18:45,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53c77307518d4b458b058ea28c2b6fc4 2023-07-12 20:18:45,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/table/53c77307518d4b458b058ea28c2b6fc4, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 20:18:45,238 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 219ms, sequenceid=26, compaction requested=false 2023-07-12 20:18:45,248 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 20:18:45,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 20:18:45,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:45,250 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 20:18:45,250 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 20:18:45,336 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,336 INFO [RS:1;jenkins-hbase4:38407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38407,1689193120989; zookeeper connection closed. 2023-07-12 20:18:45,336 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:38407-0x1015b30065e0002, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,337 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2082507e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2082507e 2023-07-12 20:18:45,421 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46531,1689193120819; all regions closed. 
2023-07-12 20:18:45,427 DEBUG [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs 2023-07-12 20:18:45,427 INFO [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46531%2C1689193120819.meta:.meta(num 1689193122105) 2023-07-12 20:18:45,434 DEBUG [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/oldWALs 2023-07-12 20:18:45,434 INFO [RS:0;jenkins-hbase4:46531] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46531%2C1689193120819:(num 1689193121894) 2023-07-12 20:18:45,434 DEBUG [RS:0;jenkins-hbase4:46531] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,434 INFO [RS:0;jenkins-hbase4:46531] regionserver.LeaseManager(133): Closed leases 2023-07-12 20:18:45,434 INFO [RS:0;jenkins-hbase4:46531] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 20:18:45,434 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:45,436 INFO [RS:0;jenkins-hbase4:46531] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46531 2023-07-12 20:18:45,438 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 20:18:45,438 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46531,1689193120819 2023-07-12 20:18:45,438 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46531,1689193120819] 2023-07-12 20:18:45,438 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46531,1689193120819; numProcessing=4 2023-07-12 20:18:45,440 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46531,1689193120819 already deleted, retry=false 2023-07-12 20:18:45,440 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46531,1689193120819 expired; onlineServers=0 2023-07-12 20:18:45,440 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40547,1689193120644' ***** 2023-07-12 20:18:45,440 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 20:18:45,441 DEBUG [M:0;jenkins-hbase4:40547] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@447234f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-12 20:18:45,441 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 20:18:45,443 DEBUG [Listener at 
localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 20:18:45,443 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 20:18:45,443 INFO [M:0;jenkins-hbase4:40547] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@78273dc3{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-12 20:18:45,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 20:18:45,444 INFO [M:0;jenkins-hbase4:40547] server.AbstractConnector(383): Stopped ServerConnector@79468c31{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,444 INFO [M:0;jenkins-hbase4:40547] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 20:18:45,444 INFO [M:0;jenkins-hbase4:40547] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7793b15a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-12 20:18:45,445 INFO [M:0;jenkins-hbase4:40547] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@63cb53b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/hadoop.log.dir/,STOPPED} 2023-07-12 20:18:45,445 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40547,1689193120644 2023-07-12 20:18:45,445 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40547,1689193120644; all regions closed. 2023-07-12 20:18:45,445 DEBUG [M:0;jenkins-hbase4:40547] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 20:18:45,445 INFO [M:0;jenkins-hbase4:40547] master.HMaster(1491): Stopping master jetty server 2023-07-12 20:18:45,446 INFO [M:0;jenkins-hbase4:40547] server.AbstractConnector(383): Stopped ServerConnector@1abdfc99{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 20:18:45,447 DEBUG [M:0;jenkins-hbase4:40547] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 20:18:45,447 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 20:18:45,447 DEBUG [M:0;jenkins-hbase4:40547] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 20:18:45,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193121667] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689193121667,5,FailOnTimeoutGroup] 2023-07-12 20:18:45,447 INFO [M:0;jenkins-hbase4:40547] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-12 20:18:45,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193121668] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689193121668,5,FailOnTimeoutGroup] 2023-07-12 20:18:45,447 INFO [M:0;jenkins-hbase4:40547] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 20:18:45,447 INFO [M:0;jenkins-hbase4:40547] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-12 20:18:45,447 DEBUG [M:0;jenkins-hbase4:40547] master.HMaster(1512): Stopping service threads 2023-07-12 20:18:45,447 INFO [M:0;jenkins-hbase4:40547] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 20:18:45,447 ERROR [M:0;jenkins-hbase4:40547] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 20:18:45,448 INFO [M:0;jenkins-hbase4:40547] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 20:18:45,448 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 20:18:45,448 DEBUG [M:0;jenkins-hbase4:40547] zookeeper.ZKUtil(398): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 20:18:45,448 WARN [M:0;jenkins-hbase4:40547] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 20:18:45,448 INFO [M:0;jenkins-hbase4:40547] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 20:18:45,448 INFO [M:0;jenkins-hbase4:40547] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 20:18:45,448 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 20:18:45,448 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:45,448 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:45,449 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 20:18:45,449 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 20:18:45,449 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB 2023-07-12 20:18:45,460 INFO [M:0;jenkins-hbase4:40547] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a9a7f74dc91f427f99b882927d8de33b 2023-07-12 20:18:45,465 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a9a7f74dc91f427f99b882927d8de33b as hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a9a7f74dc91f427f99b882927d8de33b 2023-07-12 20:18:45,470 INFO [M:0;jenkins-hbase4:40547] regionserver.HStore(1080): Added hdfs://localhost:34547/user/jenkins/test-data/ff0a15b9-9523-fc1a-1d0c-41e8122dbf0f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a9a7f74dc91f427f99b882927d8de33b, entries=22, sequenceid=175, filesize=11.1 K 2023-07-12 20:18:45,471 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78041, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-12 20:18:45,473 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 20:18:45,473 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 20:18:45,478 INFO [M:0;jenkins-hbase4:40547] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 20:18:45,478 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 20:18:45,478 INFO [M:0;jenkins-hbase4:40547] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40547 2023-07-12 20:18:45,480 DEBUG [M:0;jenkins-hbase4:40547] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40547,1689193120644 already deleted, retry=false 2023-07-12 20:18:45,598 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,598 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40547,1689193120644; zookeeper connection closed. 2023-07-12 20:18:45,598 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1015b30065e0000, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,698 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,698 INFO [RS:0;jenkins-hbase4:46531] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46531,1689193120819; zookeeper connection closed. 
2023-07-12 20:18:45,698 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:46531-0x1015b30065e0001, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,698 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3471f6ee] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3471f6ee 2023-07-12 20:18:45,798 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,798 INFO [RS:2;jenkins-hbase4:43827] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43827,1689193121229; zookeeper connection closed. 2023-07-12 20:18:45,798 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:43827-0x1015b30065e0003, quorum=127.0.0.1:58245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 20:18:45,799 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@132bc9fa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@132bc9fa 2023-07-12 20:18:45,799 INFO [Listener at localhost/33473] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 20:18:45,799 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:45,803 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:45,906 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:45,907 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1220816525-172.31.14.131-1689193119866 (Datanode Uuid eac2f004-2fd8-4962-a16e-f56ab832563c) service to localhost/127.0.0.1:34547 2023-07-12 20:18:45,907 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data5/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:45,908 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data6/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:45,909 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:45,913 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:46,017 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-12 20:18:46,017 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1220816525-172.31.14.131-1689193119866 (Datanode Uuid 94a7f703-54e3-41fa-88b8-2ab27578b357) service to localhost/127.0.0.1:34547 2023-07-12 20:18:46,018 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data3/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:46,018 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data4/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:46,020 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 20:18:46,023 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:46,127 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 20:18:46,127 WARN [BP-1220816525-172.31.14.131-1689193119866 heartbeating to localhost/127.0.0.1:34547] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1220816525-172.31.14.131-1689193119866 (Datanode Uuid 23cafc6d-1524-40ce-8464-43397641102c) service to localhost/127.0.0.1:34547 2023-07-12 20:18:46,127 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data1/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:46,128 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/584a05b3-5465-6e2a-da19-3ad7a3ee7362/cluster_30e33c19-adf1-1e86-fa34-9cf5243e47c3/dfs/data/data2/current/BP-1220816525-172.31.14.131-1689193119866] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 20:18:46,140 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 20:18:46,261 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 20:18:46,301 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1293): Minicluster is down