2023-07-21 05:14:03,599 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645
2023-07-21 05:14:03,621 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-21 05:14:03,647 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-21 05:14:03,648 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64, deleteOnExit=true
2023-07-21 05:14:03,648 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-21 05:14:03,648 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/test.cache.data in system properties and HBase conf
2023-07-21 05:14:03,650 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.tmp.dir in system properties and HBase conf
2023-07-21 05:14:03,650 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir in system properties and HBase conf
2023-07-21 05:14:03,651 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-21 05:14:03,651 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-21 05:14:03,651 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-21 05:14:03,812 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-21 05:14:04,259 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-21 05:14:04,264 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-21 05:14:04,264 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-21 05:14:04,264 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-21 05:14:04,265 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 05:14:04,265 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-21 05:14:04,265 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-21 05:14:04,265 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-21 05:14:04,266 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 05:14:04,266 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-21 05:14:04,266 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/nfs.dump.dir in system properties and HBase conf
2023-07-21 05:14:04,266 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/java.io.tmpdir in system properties and HBase conf
2023-07-21 05:14:04,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-21 05:14:04,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-21 05:14:04,267 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-21 05:14:04,851 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 05:14:04,856 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 05:14:05,180 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-21 05:14:05,400 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-21 05:14:05,421 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 05:14:05,464 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 05:14:05,500 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/java.io.tmpdir/Jetty_localhost_44691_hdfs____sqg83c/webapp
2023-07-21 05:14:05,684 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44691
2023-07-21 05:14:05,738 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-21 05:14:05,738 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-21 05:14:06,300 WARN [Listener at localhost/38517] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 05:14:06,399 WARN [Listener at localhost/38517] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 05:14:06,421 WARN [Listener at localhost/38517] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 05:14:06,428 INFO [Listener at localhost/38517] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 05:14:06,436 INFO [Listener at localhost/38517] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/java.io.tmpdir/Jetty_localhost_40113_datanode____f5j7ni/webapp
2023-07-21 05:14:06,567 INFO [Listener at localhost/38517] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40113
2023-07-21 05:14:07,066 WARN [Listener at localhost/46191] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 05:14:07,125 WARN [Listener at localhost/46191] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 05:14:07,131 WARN [Listener at localhost/46191] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 05:14:07,133 INFO [Listener at localhost/46191] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 05:14:07,139 INFO [Listener at localhost/46191] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/java.io.tmpdir/Jetty_localhost_33345_datanode____.hhb91h/webapp
2023-07-21 05:14:07,252 INFO [Listener at localhost/46191] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33345
2023-07-21 05:14:07,264 WARN [Listener at localhost/43629] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 05:14:07,290 WARN [Listener at localhost/43629] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-21 05:14:07,294 WARN [Listener at localhost/43629] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-21 05:14:07,296 INFO [Listener at localhost/43629] log.Slf4jLog(67): jetty-6.1.26
2023-07-21 05:14:07,306 INFO [Listener at localhost/43629] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/java.io.tmpdir/Jetty_localhost_36999_datanode____.pgof2b/webapp
2023-07-21 05:14:07,429 INFO [Listener at localhost/43629] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36999
2023-07-21 05:14:07,451 WARN [Listener at localhost/34619] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-21 05:14:07,785 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x680d1958e9f67e5b: Processing first storage report for DS-7c451091-9046-4b1d-8a3f-4d703150a8ab from datanode 4efe0456-d9e9-4901-b03c-557bd4813d3f
2023-07-21 05:14:07,787 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x680d1958e9f67e5b: from storage DS-7c451091-9046-4b1d-8a3f-4d703150a8ab node DatanodeRegistration(127.0.0.1:44623, datanodeUuid=4efe0456-d9e9-4901-b03c-557bd4813d3f, infoPort=36459, infoSecurePort=0, ipcPort=43629, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-21 05:14:07,788 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf42df696ce3645c6: Processing first storage report for DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f from datanode f31f3cc6-e5a8-454e-9c55-b614b45314e8
2023-07-21 05:14:07,788 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf42df696ce3645c6: from storage DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f node DatanodeRegistration(127.0.0.1:45983, datanodeUuid=f31f3cc6-e5a8-454e-9c55-b614b45314e8, infoPort=33529, infoSecurePort=0, ipcPort=34619, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 05:14:07,788 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20766206bb944223: Processing first storage report for DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc from datanode a790b6aa-49c2-4d0b-9db3-64fba725a48a
2023-07-21 05:14:07,788 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20766206bb944223: from storage DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc node DatanodeRegistration(127.0.0.1:38349, datanodeUuid=a790b6aa-49c2-4d0b-9db3-64fba725a48a, infoPort=42737, infoSecurePort=0, ipcPort=46191, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 05:14:07,788 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x680d1958e9f67e5b: Processing first storage report for DS-b2c87276-2910-4a6c-8fc6-ee20a89200f0 from datanode 4efe0456-d9e9-4901-b03c-557bd4813d3f
2023-07-21 05:14:07,789 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x680d1958e9f67e5b: from storage DS-b2c87276-2910-4a6c-8fc6-ee20a89200f0 node DatanodeRegistration(127.0.0.1:44623, datanodeUuid=4efe0456-d9e9-4901-b03c-557bd4813d3f, infoPort=36459, infoSecurePort=0, ipcPort=43629, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-21 05:14:07,789 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf42df696ce3645c6: Processing first storage report for DS-53f45b00-93b4-495c-9a4a-3d869f35f05b from datanode f31f3cc6-e5a8-454e-9c55-b614b45314e8
2023-07-21 05:14:07,789 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf42df696ce3645c6: from storage DS-53f45b00-93b4-495c-9a4a-3d869f35f05b node DatanodeRegistration(127.0.0.1:45983, datanodeUuid=f31f3cc6-e5a8-454e-9c55-b614b45314e8, infoPort=33529, infoSecurePort=0, ipcPort=34619, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 05:14:07,789 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20766206bb944223: Processing first storage report for DS-e4d287a2-a8ca-430a-80c7-28e7507f8cf5 from datanode a790b6aa-49c2-4d0b-9db3-64fba725a48a
2023-07-21 05:14:07,789 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20766206bb944223: from storage DS-e4d287a2-a8ca-430a-80c7-28e7507f8cf5 node DatanodeRegistration(127.0.0.1:38349, datanodeUuid=a790b6aa-49c2-4d0b-9db3-64fba725a48a, infoPort=42737, infoSecurePort=0, ipcPort=46191, storageInfo=lv=-57;cid=testClusterID;nsid=1106699394;c=1689916444933), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-21 05:14:08,006 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645
2023-07-21 05:14:08,108 INFO [Listener at localhost/34619] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/zookeeper_0, clientPort=55013, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-21 05:14:08,127 INFO [Listener at localhost/34619] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55013
2023-07-21 05:14:08,138 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:08,141 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:08,855 INFO [Listener at localhost/34619] util.FSUtils(471): Created version file at hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf with version=8
2023-07-21 05:14:08,855 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/hbase-staging
2023-07-21 05:14:08,867 DEBUG [Listener at localhost/34619] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-21 05:14:08,867 DEBUG [Listener at localhost/34619] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-21 05:14:08,867 DEBUG [Listener at localhost/34619] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-21 05:14:08,867 DEBUG [Listener at localhost/34619] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-21 05:14:09,260 INFO [Listener at localhost/34619] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-21 05:14:09,955 INFO [Listener at localhost/34619] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 05:14:10,007 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:10,007 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:10,008 INFO [Listener at localhost/34619] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 05:14:10,008 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:10,008 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 05:14:10,182 INFO [Listener at localhost/34619] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 05:14:10,284 DEBUG [Listener at localhost/34619] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-21 05:14:10,410 INFO [Listener at localhost/34619] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42467
2023-07-21 05:14:10,426 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:10,428 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:10,456 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42467 connecting to ZooKeeper ensemble=127.0.0.1:55013
2023-07-21 05:14:10,512 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:424670x0, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 05:14:10,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42467-0x101864d20580000 connected
2023-07-21 05:14:10,567 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 05:14:10,568 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 05:14:10,573 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 05:14:10,594 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42467
2023-07-21 05:14:10,597 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42467
2023-07-21 05:14:10,599 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42467
2023-07-21 05:14:10,602 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42467
2023-07-21 05:14:10,603 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42467
2023-07-21 05:14:10,653 INFO [Listener at localhost/34619] log.Log(170): Logging initialized @7775ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-21 05:14:10,816 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 05:14:10,817 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 05:14:10,817 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 05:14:10,820 INFO [Listener at localhost/34619] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-21 05:14:10,820 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 05:14:10,820 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 05:14:10,825 INFO [Listener at localhost/34619] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 05:14:10,896 INFO [Listener at localhost/34619] http.HttpServer(1146): Jetty bound to port 43335
2023-07-21 05:14:10,898 INFO [Listener at localhost/34619] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 05:14:10,934 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:10,937 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,AVAILABLE}
2023-07-21 05:14:10,938 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:10,938 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-21 05:14:11,011 INFO [Listener at localhost/34619] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 05:14:11,026 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 05:14:11,027 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 05:14:11,029 INFO [Listener at localhost/34619] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 05:14:11,037 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,065 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-21 05:14:11,078 INFO [Listener at localhost/34619] server.AbstractConnector(333): Started ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:43335}
2023-07-21 05:14:11,078 INFO [Listener at localhost/34619] server.Server(415): Started @8201ms
2023-07-21 05:14:11,082 INFO [Listener at localhost/34619] master.HMaster(444): hbase.rootdir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf, hbase.cluster.distributed=false
2023-07-21 05:14:11,167 INFO [Listener at localhost/34619] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 05:14:11,167 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,168 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,168 INFO [Listener at localhost/34619] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 05:14:11,168 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,168 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 05:14:11,176 INFO [Listener at localhost/34619] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 05:14:11,179 INFO [Listener at localhost/34619] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42315
2023-07-21 05:14:11,181 INFO [Listener at localhost/34619] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 05:14:11,189 DEBUG [Listener at localhost/34619] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 05:14:11,190 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,193 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,195 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42315 connecting to ZooKeeper ensemble=127.0.0.1:55013
2023-07-21 05:14:11,199 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:423150x0, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 05:14:11,200 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:423150x0, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 05:14:11,208 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:423150x0, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 05:14:11,208 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42315-0x101864d20580001 connected
2023-07-21 05:14:11,209 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 05:14:11,211 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42315
2023-07-21 05:14:11,214 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42315
2023-07-21 05:14:11,215 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42315
2023-07-21 05:14:11,221 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42315
2023-07-21 05:14:11,221 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42315
2023-07-21 05:14:11,224 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 05:14:11,224 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 05:14:11,224 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 05:14:11,225 INFO [Listener at localhost/34619] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 05:14:11,225 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 05:14:11,225 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 05:14:11,226 INFO [Listener at localhost/34619] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 05:14:11,228 INFO [Listener at localhost/34619] http.HttpServer(1146): Jetty bound to port 35117
2023-07-21 05:14:11,228 INFO [Listener at localhost/34619] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 05:14:11,243 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,243 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4784b602{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,AVAILABLE}
2023-07-21 05:14:11,244 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,244 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@623f7cf4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-21 05:14:11,257 INFO [Listener at localhost/34619] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 05:14:11,258 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 05:14:11,259 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 05:14:11,259 INFO [Listener at localhost/34619] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 05:14:11,261 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,265 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1364e664{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-21 05:14:11,266 INFO [Listener at localhost/34619] server.AbstractConnector(333): Started ServerConnector@6fc105c0{HTTP/1.1, (http/1.1)}{0.0.0.0:35117}
2023-07-21 05:14:11,266 INFO [Listener at localhost/34619] server.Server(415): Started @8389ms
2023-07-21 05:14:11,283 INFO [Listener at localhost/34619] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 05:14:11,284 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,284 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,285 INFO [Listener at localhost/34619] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 05:14:11,285 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,285 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 05:14:11,285 INFO [Listener at localhost/34619] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 05:14:11,288 INFO [Listener at localhost/34619] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42093
2023-07-21 05:14:11,288 INFO [Listener at localhost/34619] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 05:14:11,297 DEBUG [Listener at localhost/34619] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 05:14:11,298 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,301 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,304 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42093 connecting to ZooKeeper ensemble=127.0.0.1:55013
2023-07-21 05:14:11,308 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:420930x0, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 05:14:11,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42093-0x101864d20580002 connected
2023-07-21 05:14:11,309 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 05:14:11,310 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 05:14:11,311 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 05:14:11,312 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42093
2023-07-21 05:14:11,312 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42093
2023-07-21 05:14:11,319 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42093
2023-07-21 05:14:11,323 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42093
2023-07-21 05:14:11,323 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42093
2023-07-21 05:14:11,326 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 05:14:11,326 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 05:14:11,326 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 05:14:11,327 INFO [Listener at localhost/34619] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 05:14:11,327 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 05:14:11,327 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 05:14:11,327 INFO [Listener at localhost/34619] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 05:14:11,328 INFO [Listener at localhost/34619] http.HttpServer(1146): Jetty bound to port 35997
2023-07-21 05:14:11,328 INFO [Listener at localhost/34619] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 05:14:11,332 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,333 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ab64a2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,AVAILABLE}
2023-07-21 05:14:11,333 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,334 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39d35c00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-21 05:14:11,346 INFO [Listener at localhost/34619] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 05:14:11,347 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 05:14:11,347 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 05:14:11,347 INFO [Listener at localhost/34619] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-21 05:14:11,349 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,350 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2d178a1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-21 05:14:11,351 INFO [Listener at localhost/34619] server.AbstractConnector(333): Started ServerConnector@4ccea9bd{HTTP/1.1, (http/1.1)}{0.0.0.0:35997}
2023-07-21 05:14:11,352 INFO [Listener at localhost/34619] server.Server(415): Started @8474ms
2023-07-21 05:14:11,367 INFO [Listener at localhost/34619] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-21 05:14:11,367 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,368 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,368 INFO [Listener at localhost/34619] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-21 05:14:11,368 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-21 05:14:11,368 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-21 05:14:11,368 INFO [Listener at localhost/34619] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-21 05:14:11,370 INFO [Listener at localhost/34619] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40677
2023-07-21 05:14:11,370 INFO [Listener at localhost/34619] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-21 05:14:11,373 DEBUG [Listener at localhost/34619] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-21 05:14:11,374 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,376 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,377 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40677 connecting to ZooKeeper ensemble=127.0.0.1:55013
2023-07-21 05:14:11,381 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:406770x0, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-21 05:14:11,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40677-0x101864d20580003 connected
2023-07-21 05:14:11,382 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 05:14:11,383 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-21 05:14:11,383 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-21 05:14:11,384 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40677
2023-07-21 05:14:11,384 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40677
2023-07-21 05:14:11,384 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40677
2023-07-21 05:14:11,387 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40677
2023-07-21 05:14:11,387 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40677
2023-07-21 05:14:11,389 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-21 05:14:11,389 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-21 05:14:11,389 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-21 05:14:11,390 INFO [Listener at localhost/34619] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-21 05:14:11,390 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-21 05:14:11,390 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-21 05:14:11,390 INFO [Listener at localhost/34619] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-21 05:14:11,391 INFO [Listener at localhost/34619] http.HttpServer(1146): Jetty bound to port 41103
2023-07-21 05:14:11,391 INFO [Listener at localhost/34619] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 05:14:11,392 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,393 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5a76cee2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,AVAILABLE}
2023-07-21 05:14:11,393 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,393 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e2294fa{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-21 05:14:11,402 INFO [Listener at localhost/34619] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-21 05:14:11,404 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-21 05:14:11,404 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-21 05:14:11,404 INFO [Listener at localhost/34619] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-21 05:14:11,405 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-21 05:14:11,406 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@36a7cf96{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-21 05:14:11,407 INFO [Listener at localhost/34619] server.AbstractConnector(333): Started ServerConnector@1266d143{HTTP/1.1, (http/1.1)}{0.0.0.0:41103}
2023-07-21 05:14:11,407 INFO [Listener at localhost/34619] server.Server(415): Started @8530ms
2023-07-21 05:14:11,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-21 05:14:11,417 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@69fc8b39{HTTP/1.1, (http/1.1)}{0.0.0.0:36061}
2023-07-21 05:14:11,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8541ms
2023-07-21 05:14:11,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42467,1689916449058
2023-07-21 05:14:11,430 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 05:14:11,431 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42467,1689916449058
2023-07-21 05:14:11,451 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 05:14:11,451 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 05:14:11,451 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 05:14:11,451 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-21 05:14:11,452 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 05:14:11,454 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 05:14:11,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42467,1689916449058 from backup master directory
2023-07-21 05:14:11,456 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-21 05:14:11,460 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42467,1689916449058
2023-07-21 05:14:11,460 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-21 05:14:11,461 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-21 05:14:11,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42467,1689916449058
2023-07-21 05:14:11,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-21 05:14:11,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-21 05:14:11,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/hbase.id with ID: 5e5a5491-4a64-49c9-9fbd-7c0bc221024b
2023-07-21 05:14:11,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-21 05:14:11,653 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 05:14:11,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x546eadd7 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-21 05:14:11,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34032b76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-21 05:14:11,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-21 05:14:11,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-21 05:14:11,804 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 05:14:11,804 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 05:14:11,806 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 05:14:11,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 05:14:11,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:11,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store-tmp 2023-07-21 05:14:11,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:11,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 05:14:11,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:11,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:11,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 05:14:11,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:11,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 05:14:11,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:11,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/WALs/jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:11,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42467%2C1689916449058, suffix=, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/WALs/jenkins-hbase4.apache.org,42467,1689916449058, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/oldWALs, maxLogs=10 2023-07-21 05:14:12,005 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:12,005 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:12,005 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:12,014 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 05:14:12,087 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/WALs/jenkins-hbase4.apache.org,42467,1689916449058/jenkins-hbase4.apache.org%2C42467%2C1689916449058.1689916451949 2023-07-21 05:14:12,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK]] 2023-07-21 05:14:12,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:12,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:12,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,221 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,228 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 05:14:12,257 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 05:14:12,271 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 05:14:12,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:12,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:12,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10697524640, jitterRate=-0.0037153810262680054}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:12,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:12,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 05:14:12,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 05:14:12,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 05:14:12,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 05:14:12,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 05:14:12,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 38 msec 2023-07-21 05:14:12,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 05:14:12,401 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 05:14:12,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 05:14:12,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 05:14:12,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 05:14:12,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 05:14:12,429 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:12,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 05:14:12,431 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 05:14:12,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 05:14:12,453 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:12,453 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:12,453 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:12,453 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:12,453 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:12,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42467,1689916449058, sessionid=0x101864d20580000, setting cluster-up flag (Was=false) 2023-07-21 05:14:12,478 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:12,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 05:14:12,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:12,494 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:12,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 05:14:12,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:12,505 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.hbase-snapshot/.tmp 2023-07-21 05:14:12,512 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(951): ClusterId : 5e5a5491-4a64-49c9-9fbd-7c0bc221024b 2023-07-21 05:14:12,512 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(951): ClusterId : 5e5a5491-4a64-49c9-9fbd-7c0bc221024b 2023-07-21 05:14:12,517 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(951): ClusterId : 5e5a5491-4a64-49c9-9fbd-7c0bc221024b 2023-07-21 05:14:12,523 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:12,523 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:12,523 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:12,530 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:12,530 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:12,530 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:12,531 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:12,531 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:12,531 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:12,534 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:12,534 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:12,534 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:12,536 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ReadOnlyZKClient(139): Connect 0x71eee8f5 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-21 05:14:12,537 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ReadOnlyZKClient(139): Connect 0x097986d4 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:12,537 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ReadOnlyZKClient(139): Connect 0x617ec7c3 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:12,548 DEBUG [RS:1;jenkins-hbase4:42093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f2908b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:12,548 DEBUG [RS:0;jenkins-hbase4:42315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7be698b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:12,549 DEBUG [RS:1;jenkins-hbase4:42093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b168385, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:12,549 DEBUG [RS:0;jenkins-hbase4:42315] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@381eda4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:12,552 DEBUG [RS:2;jenkins-hbase4:40677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6fb9359c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:12,552 DEBUG [RS:2;jenkins-hbase4:40677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b86acb3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:12,583 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42093 2023-07-21 05:14:12,585 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40677 2023-07-21 05:14:12,586 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42315 2023-07-21 05:14:12,590 INFO [RS:0;jenkins-hbase4:42315] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:12,591 INFO [RS:2;jenkins-hbase4:40677] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:12,591 INFO [RS:2;jenkins-hbase4:40677] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:12,591 INFO [RS:0;jenkins-hbase4:42315] regionserver.RegionServerCoprocessorHost(67): Table 
coprocessor loading is enabled 2023-07-21 05:14:12,590 INFO [RS:1;jenkins-hbase4:42093] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:12,591 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:12,591 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:12,591 INFO [RS:1;jenkins-hbase4:42093] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:12,592 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:12,595 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42093, startcode=1689916451283 2023-07-21 05:14:12,595 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42315, startcode=1689916451166 2023-07-21 05:14:12,595 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:40677, startcode=1689916451367 2023-07-21 05:14:12,623 DEBUG [RS:2;jenkins-hbase4:40677] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:12,624 DEBUG [RS:0;jenkins-hbase4:42315] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:12,623 DEBUG [RS:1;jenkins-hbase4:42093] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:12,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 05:14:12,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 05:14:12,642 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:12,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 05:14:12,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 05:14:12,705 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34003, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:12,705 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42853, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:12,705 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54401, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:12,717 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:12,730 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:12,731 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:12,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:12,761 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-21 05:14:12,761 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 05:14:12,761 WARN [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 05:14:12,761 WARN [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 05:14:12,761 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 05:14:12,762 WARN [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 05:14:12,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:12,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 05:14:12,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:12,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:12,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:12,863 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:40677, startcode=1689916451367 2023-07-21 05:14:12,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689916482875 2023-07-21 05:14:12,863 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42093, startcode=1689916451283 2023-07-21 05:14:12,863 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42315, startcode=1689916451166 2023-07-21 05:14:12,876 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
2023-07-21 05:14:12,877 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:12,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 05:14:12,882 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 05:14:12,883 WARN [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 05:14:12,883 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:12,883 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 05:14:12,884 WARN [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-21 05:14:12,885 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 05:14:12,885 WARN [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 
2023-07-21 05:14:12,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 05:14:12,888 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:12,888 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 05:14:12,891 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:12,898 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 05:14:12,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 05:14:12,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 05:14:12,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 05:14:12,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:12,903 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 05:14:12,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 05:14:12,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 05:14:12,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 05:14:12,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 05:14:12,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916452911,5,FailOnTimeoutGroup] 2023-07-21 05:14:12,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916452914,5,FailOnTimeoutGroup] 2023-07-21 05:14:12,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:12,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 05:14:12,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:12,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:12,971 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:12,973 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:12,973 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf 2023-07-21 05:14:13,005 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:13,008 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:13,015 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info 2023-07-21 05:14:13,016 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:13,017 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,018 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:13,020 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:13,021 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:13,022 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,022 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:13,025 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table 2023-07-21 05:14:13,026 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:13,027 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,029 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:13,031 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:13,035 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 05:14:13,038 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:13,047 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:13,048 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11809733440, jitterRate=0.09986713528633118}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:13,048 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:13,049 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:13,049 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:13,049 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:13,049 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:13,049 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:13,050 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:13,050 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:13,060 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:13,060 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 05:14:13,071 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 05:14:13,084 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:40677, startcode=1689916451367 2023-07-21 05:14:13,085 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42315, startcode=1689916451166 2023-07-21 05:14:13,086 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 05:14:13,088 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:42093, startcode=1689916451283 2023-07-21 05:14:13,091 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,093 
INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:13,094 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 05:14:13,097 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 05:14:13,101 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,102 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf 2023-07-21 05:14:13,102 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38517 2023-07-21 05:14:13,102 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43335 2023-07-21 05:14:13,106 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:13,107 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 05:14:13,108 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,109 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
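Annotation: the "Updating default servers" / "Updated with servers: N" records above come from the RSGroupInfoManagerImpl listener folding each newly registered region server into the default RSGroup. A minimal client-side sketch of inspecting that membership with the hbase-rsgroup admin client is below; the quorum value is an assumption for illustration, and the RSGroupAdminClient calls are my recollection of the 2.4-era API, not something taken from this log.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupMembership {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Assumed quorum host; the mini-cluster in this log runs ZooKeeper on 127.0.0.1.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Newly registered region servers land in the default group first, as logged above.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          for (Address server : defaultGroup.getServers()) {
            System.out.println("default group member: " + server);
          }
        }
      }
    }
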
2023-07-21 05:14:13,109 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 05:14:13,110 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf 2023-07-21 05:14:13,110 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf 2023-07-21 05:14:13,110 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38517 2023-07-21 05:14:13,110 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38517 2023-07-21 05:14:13,110 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43335 2023-07-21 05:14:13,110 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43335 2023-07-21 05:14:13,112 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:13,117 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,117 WARN [RS:2;jenkins-hbase4:40677] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
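Annotation: the ZKWatcher/ZKUtil records above show each server watching its peers under the /hbase/rs znode on the quorum at 127.0.0.1:55013. A sketch of the client-side keys that point at the same ensemble and base znode follows; the values are simply copied from this log run, not a recommendation.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZkClientSettings {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Quorum host and client port as used by this run (assumed split of the logged host:port).
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 55013);
        // Base znode under which /rs, /meta-region-server, etc. live ("/hbase" is the default).
        conf.set("zookeeper.znode.parent", "/hbase");
        return conf;
      }
    }
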
2023-07-21 05:14:13,117 INFO [RS:2;jenkins-hbase4:40677] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:13,117 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42093,1689916451283] 2023-07-21 05:14:13,117 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40677,1689916451367] 2023-07-21 05:14:13,118 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42315,1689916451166] 2023-07-21 05:14:13,118 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,117 WARN [RS:1;jenkins-hbase4:42093] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:13,119 WARN [RS:0;jenkins-hbase4:42315] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
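Annotation: the region servers above are instantiating the AsyncFSWALProvider and creating their WAL directories under .../WALs/<server>. The provider and roll behaviour are configuration-driven; a hedged sketch of the relevant keys is below ("filesystem" and "multiwal" as alternative provider names come from the HBase 2.x docs, not from this log).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSettings {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" is what this log shows; "filesystem" (FSHLog) and "multiwal" are the other stock options.
        conf.set("hbase.wal.provider", "asyncfs");
        // Roll-related settings reported later in this log: 256 MB WAL block size, 32 max logs.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }
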
2023-07-21 05:14:13,119 INFO [RS:1;jenkins-hbase4:42093] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:13,119 INFO [RS:0;jenkins-hbase4:42315] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:13,123 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,123 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,176 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,176 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,176 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,177 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,177 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,177 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,177 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,178 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,178 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,190 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:13,190 DEBUG [RS:2;jenkins-hbase4:40677] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:13,190 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:13,202 INFO [RS:2;jenkins-hbase4:40677] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 
5000 milliseconds 2023-07-21 05:14:13,202 INFO [RS:0;jenkins-hbase4:42315] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:13,202 INFO [RS:1;jenkins-hbase4:42093] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:13,227 INFO [RS:1;jenkins-hbase4:42093] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:13,227 INFO [RS:2;jenkins-hbase4:40677] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:13,227 INFO [RS:0;jenkins-hbase4:42315] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:13,234 INFO [RS:1;jenkins-hbase4:42093] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:13,234 INFO [RS:0;jenkins-hbase4:42315] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:13,235 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,234 INFO [RS:2;jenkins-hbase4:40677] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:13,235 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,236 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,236 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:13,239 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:13,239 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:13,248 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,248 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,248 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
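Annotation: the MemStoreFlusher and PressureAwareCompactionThroughputController lines above reflect the global memstore limit (a fraction of the test JVM heap, 782.4 MB here) and the 50-100 MB/s compaction throughput bounds. A sketch of the configuration keys I believe sit behind those numbers, assuming the 2.4 defaults are in play:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushAndCompactionThroughput {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap usable by all memstores (the logged 782.4 MB is this fraction of the heap).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Compaction throughput bounds in bytes/sec, matching the logged 100 MB/s and 50 MB/s.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }
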
2023-07-21 05:14:13,248 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,248 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:13,250 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:13,250 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,249 DEBUG [jenkins-hbase4:42467] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 
05:14:13,250 DEBUG [RS:2;jenkins-hbase4:40677] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,250 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,251 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,251 DEBUG [RS:1;jenkins-hbase4:42093] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,251 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,251 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,251 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:13,252 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,252 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,252 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,252 DEBUG [RS:0;jenkins-hbase4:42315] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:13,255 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,255 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
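Annotation: RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS and the other pools above are the per-region-server executor services, each sized from configuration. A sketch of bumping a couple of those pool sizes is below; the key names are what I recall the 2.x region server reading, so treat them as assumptions to verify rather than settled fact.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerExecutorSizing {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys: the pools logged above with corePoolSize=1 default to small values like these.
        conf.setInt("hbase.regionserver.executor.openregion.threads", 3);
        conf.setInt("hbase.regionserver.executor.closeregion.threads", 3);
        return conf;
      }
    }
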
2023-07-21 05:14:13,256 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,256 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,270 DEBUG [jenkins-hbase4:42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:13,271 DEBUG [jenkins-hbase4:42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:13,271 DEBUG [jenkins-hbase4:42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:13,271 DEBUG [jenkins-hbase4:42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:13,271 DEBUG [jenkins-hbase4:42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:13,274 INFO [RS:2;jenkins-hbase4:40677] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:13,274 INFO [RS:0;jenkins-hbase4:42315] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:13,276 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42093,1689916451283, state=OPENING 2023-07-21 05:14:13,274 INFO [RS:1;jenkins-hbase4:42093] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:13,279 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42093,1689916451283-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,279 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42315,1689916451166-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,279 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40677,1689916451367-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
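Annotation: MetaTableLocator publishing the hbase:meta location (state=OPENING) to ZooKeeper, as logged above, is what later lets clients resolve the catalog region. A minimal client-side way to read that location once the region is OPEN is sketched below; it assumes an already configured Connection such as the one built in the earlier sketches.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocation {
      static HRegionLocation locateMeta(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // hbase:meta is a single region spanning the whole key space, so any row key resolves it.
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW);
        }
      }
    }
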
2023-07-21 05:14:13,286 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 05:14:13,288 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:13,289 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:13,294 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:13,310 INFO [RS:1;jenkins-hbase4:42093] regionserver.Replication(203): jenkins-hbase4.apache.org,42093,1689916451283 started 2023-07-21 05:14:13,311 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42093,1689916451283, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42093, sessionid=0x101864d20580002 2023-07-21 05:14:13,311 INFO [RS:2;jenkins-hbase4:40677] regionserver.Replication(203): jenkins-hbase4.apache.org,40677,1689916451367 started 2023-07-21 05:14:13,311 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40677,1689916451367, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40677, sessionid=0x101864d20580003 2023-07-21 05:14:13,311 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:13,312 DEBUG [RS:1;jenkins-hbase4:42093] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,312 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42093,1689916451283' 2023-07-21 05:14:13,312 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:13,312 INFO [RS:0;jenkins-hbase4:42315] regionserver.Replication(203): jenkins-hbase4.apache.org,42315,1689916451166 started 2023-07-21 05:14:13,312 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42315,1689916451166, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42315, sessionid=0x101864d20580001 2023-07-21 05:14:13,312 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:13,315 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:13,318 DEBUG [RS:2;jenkins-hbase4:40677] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,312 DEBUG [RS:0;jenkins-hbase4:42315] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,321 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42315,1689916451166' 2023-07-21 05:14:13,321 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(134): Checking for aborted 
procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:13,316 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:13,319 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40677,1689916451367' 2023-07-21 05:14:13,321 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:13,322 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:13,322 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:13,322 DEBUG [RS:1;jenkins-hbase4:42093] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,323 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42093,1689916451283' 2023-07-21 05:14:13,323 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:13,323 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:13,323 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:13,324 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:13,324 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:13,324 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:13,325 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:13,324 DEBUG [RS:1;jenkins-hbase4:42093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:13,325 DEBUG [RS:2;jenkins-hbase4:40677] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:13,328 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40677,1689916451367' 2023-07-21 05:14:13,328 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:13,325 DEBUG [RS:0;jenkins-hbase4:42315] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:13,328 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42315,1689916451166' 2023-07-21 05:14:13,328 DEBUG [RS:1;jenkins-hbase4:42093] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:13,328 INFO [RS:1;jenkins-hbase4:42093] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 
05:14:13,328 INFO [RS:1;jenkins-hbase4:42093] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 05:14:13,328 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:13,329 DEBUG [RS:2;jenkins-hbase4:40677] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:13,329 DEBUG [RS:2;jenkins-hbase4:40677] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:13,330 INFO [RS:2;jenkins-hbase4:40677] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:13,330 DEBUG [RS:0;jenkins-hbase4:42315] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:13,330 INFO [RS:2;jenkins-hbase4:40677] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 05:14:13,330 DEBUG [RS:0;jenkins-hbase4:42315] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:13,331 INFO [RS:0;jenkins-hbase4:42315] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:13,331 INFO [RS:0;jenkins-hbase4:42315] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 05:14:13,406 WARN [ReadOnlyZKClient-127.0.0.1:55013@0x546eadd7] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 05:14:13,437 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:13,444 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:13,445 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42093] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50522 deadline: 1689916513445, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,447 INFO [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42315%2C1689916451166, suffix=, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42315,1689916451166, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:13,448 INFO [RS:2;jenkins-hbase4:40677] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40677%2C1689916451367, suffix=, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,40677,1689916451367, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:13,449 INFO [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42093%2C1689916451283, 
suffix=, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42093,1689916451283, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:13,483 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:13,493 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:13,493 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:13,500 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:13,500 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:13,504 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:13,512 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:13,513 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:13,513 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:13,518 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:13,544 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:13,547 INFO [RS:2;jenkins-hbase4:40677] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,40677,1689916451367/jenkins-hbase4.apache.org%2C40677%2C1689916451367.1689916453453 2023-07-21 05:14:13,547 INFO 
[RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42315,1689916451166/jenkins-hbase4.apache.org%2C42315%2C1689916451166.1689916453451 2023-07-21 05:14:13,548 INFO [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42093,1689916451283/jenkins-hbase4.apache.org%2C42093%2C1689916451283.1689916453453 2023-07-21 05:14:13,556 DEBUG [RS:2;jenkins-hbase4:40677] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK], DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK]] 2023-07-21 05:14:13,558 DEBUG [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK], DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK]] 2023-07-21 05:14:13,570 DEBUG [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK]] 2023-07-21 05:14:13,565 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:13,593 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 05:14:13,593 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:13,598 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42093%2C1689916451283.meta, suffix=.meta, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42093,1689916451283, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:13,622 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:13,622 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:13,622 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:13,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42093,1689916451283/jenkins-hbase4.apache.org%2C42093%2C1689916451283.meta.1689916453600.meta 2023-07-21 05:14:13,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK]] 2023-07-21 05:14:13,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:13,632 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:13,636 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 05:14:13,639 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 05:14:13,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 05:14:13,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:13,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 05:14:13,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 05:14:13,650 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:13,652 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info 2023-07-21 05:14:13,652 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info 2023-07-21 05:14:13,653 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:13,653 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,654 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:13,655 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:13,655 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:13,656 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:13,657 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,657 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:13,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table 2023-07-21 05:14:13,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table 2023-07-21 05:14:13,659 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:13,660 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:13,661 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:13,664 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:13,668 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 05:14:13,670 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:13,672 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11915836480, jitterRate=0.10974875092506409}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:13,672 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:13,688 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689916453514 2023-07-21 05:14:13,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 05:14:13,720 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 05:14:13,721 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42093,1689916451283, state=OPEN 2023-07-21 05:14:13,725 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:13,725 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:13,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 05:14:13,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42093,1689916451283 in 431 msec 
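Annotation: with pid=3 finished, hbase:meta,,1.1588230740 is OPEN on jenkins-hbase4.apache.org,42093. In test code the usual pattern is to block until that assignment has happened before issuing catalog reads; a hedged sketch against HBaseTestingUtility follows, where the utility instance is assumed to be the one that started this mini-cluster.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForMeta {
      static void waitForMeta(HBaseTestingUtility util) throws Exception {
        // Blocks until every region of hbase:meta has been assigned to some region server.
        util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
      }
    }
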
2023-07-21 05:14:13,735 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 05:14:13,736 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 660 msec 2023-07-21 05:14:13,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0870 sec 2023-07-21 05:14:13,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689916453742, completionTime=-1 2023-07-21 05:14:13,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 05:14:13,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 05:14:13,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 05:14:13,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689916513812 2023-07-21 05:14:13,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689916573812 2023-07-21 05:14:13,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 70 msec 2023-07-21 05:14:13,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42467,1689916449058-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42467,1689916449058-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42467,1689916449058-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42467, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:13,841 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 05:14:13,856 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
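Annotation: the master is about to create hbase:namespace with the single 'info' family shown in the next record. For reference, an application table with the same family settings would be declared roughly as below; the table name is a made-up example, and only the family attributes mirror the logged schema.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateInfoTable {
      static void create(Admin admin) throws IOException {
        TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("example"));
        table.setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build());
        admin.createTable(table.build());
      }
    }
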
2023-07-21 05:14:13,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:13,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 05:14:13,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:13,875 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:13,891 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:13,894 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77 empty. 2023-07-21 05:14:13,895 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:13,895 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 05:14:13,969 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:13,972 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 05:14:13,975 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:13,977 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:13,981 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:13,982 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 empty. 2023-07-21 05:14:13,983 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:13,983 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 05:14:14,015 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:14,019 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ede3ac9f206f1997341b19733c39fd22, NAME => 'hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:14,040 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:14,040 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing ede3ac9f206f1997341b19733c39fd22, disabling compactions & flushes 2023-07-21 05:14:14,041 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:14,041 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:14,041 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. after waiting 0 ms 2023-07-21 05:14:14,041 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:14,041 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:14,041 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:14,045 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:14,065 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916454049"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916454049"}]},"ts":"1689916454049"} 2023-07-21 05:14:14,103 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:14,105 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:14,111 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916454105"}]},"ts":"1689916454105"} 2023-07-21 05:14:14,118 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 05:14:14,122 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:14,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:14,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:14,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:14,123 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:14,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, ASSIGN}] 2023-07-21 05:14:14,128 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, ASSIGN 2023-07-21 05:14:14,130 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:14,281 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:14,282 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:14,283 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916454282"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916454282"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916454282"}]},"ts":"1689916454282"} 2023-07-21 05:14:14,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:14,344 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:14,351 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e9f604e2452442c1f9af258e734bdc77, NAME => 'hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e9f604e2452442c1f9af258e734bdc77, disabling compactions & flushes 2023-07-21 05:14:14,389 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. after waiting 0 ms 2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:14,389 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 
2023-07-21 05:14:14,389 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e9f604e2452442c1f9af258e734bdc77: 2023-07-21 05:14:14,394 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:14,399 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916454399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916454399"}]},"ts":"1689916454399"} 2023-07-21 05:14:14,404 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:14,406 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:14,406 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916454406"}]},"ts":"1689916454406"} 2023-07-21 05:14:14,409 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 05:14:14,415 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:14,415 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:14,415 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:14,415 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:14,415 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:14,416 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e9f604e2452442c1f9af258e734bdc77, ASSIGN}] 2023-07-21 05:14:14,419 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e9f604e2452442c1f9af258e734bdc77, ASSIGN 2023-07-21 05:14:14,421 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e9f604e2452442c1f9af258e734bdc77, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:14,443 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:14,443 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:14,446 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:14,453 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:14,453 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ede3ac9f206f1997341b19733c39fd22, NAME => 'hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:14,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:14,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. service=MultiRowMutationService 2023-07-21 05:14:14,454 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 05:14:14,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:14,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,458 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,460 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:14,460 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:14,461 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ede3ac9f206f1997341b19733c39fd22 columnFamilyName m 2023-07-21 05:14:14,461 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(310): Store=ede3ac9f206f1997341b19733c39fd22/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:14,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,465 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,472 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:14,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:14,477 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ede3ac9f206f1997341b19733c39fd22; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@b396551, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:14,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:14,479 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22., pid=7, masterSystemTime=1689916454443 2023-07-21 05:14:14,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:14,484 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:14,486 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:14,486 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916454485"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916454485"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916454485"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916454485"}]},"ts":"1689916454485"} 2023-07-21 05:14:14,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 05:14:14,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,40677,1689916451367 in 203 msec 2023-07-21 05:14:14,499 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 05:14:14,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, ASSIGN in 370 msec 2023-07-21 05:14:14,501 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:14,501 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916454501"}]},"ts":"1689916454501"} 2023-07-21 05:14:14,503 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 05:14:14,507 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:14,509 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 538 msec 2023-07-21 05:14:14,571 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:14,573 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e9f604e2452442c1f9af258e734bdc77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:14,573 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916454572"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916454572"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916454572"}]},"ts":"1689916454572"} 2023-07-21 05:14:14,577 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure e9f604e2452442c1f9af258e734bdc77, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:14,605 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:14,607 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:14,611 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 05:14:14,611 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 05:14:14,727 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:14,727 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:14,731 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:14,733 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:14,733 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:14,739 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 05:14:14,739 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50688, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:14,746 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 
2023-07-21 05:14:14,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e9f604e2452442c1f9af258e734bdc77, NAME => 'hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:14,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:14,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,749 INFO [StoreOpener-e9f604e2452442c1f9af258e734bdc77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,752 DEBUG [StoreOpener-e9f604e2452442c1f9af258e734bdc77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/info 2023-07-21 05:14:14,752 DEBUG [StoreOpener-e9f604e2452442c1f9af258e734bdc77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/info 2023-07-21 05:14:14,752 INFO [StoreOpener-e9f604e2452442c1f9af258e734bdc77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e9f604e2452442c1f9af258e734bdc77 columnFamilyName info 2023-07-21 05:14:14,753 INFO [StoreOpener-e9f604e2452442c1f9af258e734bdc77-1] regionserver.HStore(310): Store=e9f604e2452442c1f9af258e734bdc77/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:14,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,755 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,760 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:14,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:14,765 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e9f604e2452442c1f9af258e734bdc77; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11896844800, jitterRate=0.10798001289367676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:14,765 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e9f604e2452442c1f9af258e734bdc77: 2023-07-21 05:14:14,769 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77., pid=9, masterSystemTime=1689916454733 2023-07-21 05:14:14,773 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:14,773 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 
2023-07-21 05:14:14,774 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e9f604e2452442c1f9af258e734bdc77, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:14,774 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916454774"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916454774"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916454774"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916454774"}]},"ts":"1689916454774"} 2023-07-21 05:14:14,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-21 05:14:14,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure e9f604e2452442c1f9af258e734bdc77, server=jenkins-hbase4.apache.org,42315,1689916451166 in 200 msec 2023-07-21 05:14:14,785 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=4 2023-07-21 05:14:14,785 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e9f604e2452442c1f9af258e734bdc77, ASSIGN in 364 msec 2023-07-21 05:14:14,786 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:14,787 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916454786"}]},"ts":"1689916454786"} 2023-07-21 05:14:14,789 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 05:14:14,792 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:14,795 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 933 msec 2023-07-21 05:14:14,872 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 05:14:14,874 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:14,875 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:14,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:14,882 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50698, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-21 05:14:14,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 05:14:14,916 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:14,921 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 33 msec 2023-07-21 05:14:14,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 05:14:14,945 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:14,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-21 05:14:14,960 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 05:14:14,963 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 05:14:14,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.502sec 2023-07-21 05:14:14,965 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 05:14:14,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 05:14:14,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 05:14:14,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42467,1689916449058-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 05:14:14,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42467,1689916449058-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 05:14:14,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 05:14:15,027 DEBUG [Listener at localhost/34619] zookeeper.ReadOnlyZKClient(139): Connect 0x083f3c49 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:15,034 DEBUG [Listener at localhost/34619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62c69654, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:15,067 DEBUG [hconnection-0xfdeaa0f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:15,083 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50554, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:15,095 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:15,096 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:15,107 DEBUG [Listener at localhost/34619] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 05:14:15,113 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 05:14:15,137 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 05:14:15,137 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:15,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 05:14:15,146 DEBUG [Listener at localhost/34619] zookeeper.ReadOnlyZKClient(139): Connect 0x1ad901ea to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:15,155 DEBUG [Listener at localhost/34619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3604583d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:15,155 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55013 2023-07-21 05:14:15,173 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:15,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101864d2058000a connected 2023-07-21 05:14:15,218 
INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=426, OpenFileDescriptor=679, MaxFileDescriptor=60000, SystemLoadAverage=527, ProcessCount=178, AvailableMemoryMB=4516 2023-07-21 05:14:15,220 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 05:14:15,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:15,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:15,314 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 05:14:15,330 INFO [Listener at localhost/34619] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:15,330 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:15,331 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:15,331 INFO [Listener at localhost/34619] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:15,331 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:15,331 INFO [Listener at localhost/34619] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:15,331 INFO [Listener at localhost/34619] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:15,336 INFO [Listener at localhost/34619] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33541 2023-07-21 05:14:15,336 INFO [Listener at localhost/34619] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:15,338 DEBUG [Listener at localhost/34619] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:15,339 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:15,344 INFO [Listener at localhost/34619] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:15,348 INFO [Listener at localhost/34619] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33541 connecting to ZooKeeper ensemble=127.0.0.1:55013 2023-07-21 05:14:15,352 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:335410x0, 
quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:15,353 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33541-0x101864d2058000b connected 2023-07-21 05:14:15,353 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:15,355 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 05:14:15,356 DEBUG [Listener at localhost/34619] zookeeper.ZKUtil(164): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:15,363 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33541 2023-07-21 05:14:15,363 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33541 2023-07-21 05:14:15,368 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33541 2023-07-21 05:14:15,375 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33541 2023-07-21 05:14:15,375 DEBUG [Listener at localhost/34619] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33541 2023-07-21 05:14:15,378 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:15,378 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:15,378 INFO [Listener at localhost/34619] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:15,379 INFO [Listener at localhost/34619] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:15,379 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:15,379 INFO [Listener at localhost/34619] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:15,380 INFO [Listener at localhost/34619] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 05:14:15,380 INFO [Listener at localhost/34619] http.HttpServer(1146): Jetty bound to port 46017 2023-07-21 05:14:15,381 INFO [Listener at localhost/34619] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:15,387 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:15,388 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a8106ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:15,388 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:15,389 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39ac2a37{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:15,400 INFO [Listener at localhost/34619] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:15,401 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:15,401 INFO [Listener at localhost/34619] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:15,402 INFO [Listener at localhost/34619] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:15,403 INFO [Listener at localhost/34619] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:15,404 INFO [Listener at localhost/34619] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@69f161b2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:15,406 INFO [Listener at localhost/34619] server.AbstractConnector(333): Started ServerConnector@2373ab06{HTTP/1.1, (http/1.1)}{0.0.0.0:46017} 2023-07-21 05:14:15,406 INFO [Listener at localhost/34619] server.Server(415): Started @12529ms 2023-07-21 05:14:15,417 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(951): ClusterId : 5e5a5491-4a64-49c9-9fbd-7c0bc221024b 2023-07-21 05:14:15,417 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:15,420 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:15,420 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:15,423 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:15,425 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ReadOnlyZKClient(139): Connect 0x28e182c6 to 127.0.0.1:55013 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:15,430 DEBUG [RS:3;jenkins-hbase4:33541] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@262679aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:15,431 DEBUG [RS:3;jenkins-hbase4:33541] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2219e9b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:15,440 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33541 2023-07-21 05:14:15,441 INFO [RS:3;jenkins-hbase4:33541] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:15,441 INFO [RS:3;jenkins-hbase4:33541] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:15,441 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:15,443 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42467,1689916449058 with isa=jenkins-hbase4.apache.org/172.31.14.131:33541, startcode=1689916455330 2023-07-21 05:14:15,443 DEBUG [RS:3;jenkins-hbase4:33541] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:15,455 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42947, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:15,456 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42467] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,456 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 05:14:15,463 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf 2023-07-21 05:14:15,463 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38517 2023-07-21 05:14:15,463 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43335 2023-07-21 05:14:15,472 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:15,472 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:15,472 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:15,472 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:15,472 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:15,473 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33541,1689916455330] 2023-07-21 05:14:15,474 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:15,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:15,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:15,474 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:15,479 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, 
quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,479 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,479 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42467,1689916449058] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 05:14:15,479 WARN [RS:3;jenkins-hbase4:33541] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:15,480 INFO [RS:3;jenkins-hbase4:33541] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:15,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:15,480 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:15,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:15,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:15,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:15,481 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:15,487 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:15,492 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,493 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:15,493 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ZKUtil(162): regionserver:33541-0x101864d2058000b, 
quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:15,495 DEBUG [RS:3;jenkins-hbase4:33541] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:15,495 INFO [RS:3;jenkins-hbase4:33541] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:15,507 INFO [RS:3;jenkins-hbase4:33541] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:15,507 INFO [RS:3;jenkins-hbase4:33541] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:15,507 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:15,510 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:15,516 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,516 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,517 DEBUG [RS:3;jenkins-hbase4:33541] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:15,520 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:15,520 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:15,520 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:15,535 INFO [RS:3;jenkins-hbase4:33541] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:15,536 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33541,1689916455330-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:15,548 INFO [RS:3;jenkins-hbase4:33541] regionserver.Replication(203): jenkins-hbase4.apache.org,33541,1689916455330 started 2023-07-21 05:14:15,548 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33541,1689916455330, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33541, sessionid=0x101864d2058000b 2023-07-21 05:14:15,548 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:15,548 DEBUG [RS:3;jenkins-hbase4:33541] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,548 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33541,1689916455330' 2023-07-21 05:14:15,548 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:15,549 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33541,1689916455330' 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:15,550 DEBUG [RS:3;jenkins-hbase4:33541] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:15,551 DEBUG [RS:3;jenkins-hbase4:33541] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:15,551 INFO [RS:3;jenkins-hbase4:33541] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:15,551 INFO [RS:3;jenkins-hbase4:33541] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
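The records above cover a fourth region server (RS:3, port 33541) registering with the master and finishing startup after the three-server mini-cluster was already running; the rsgroup tests grow the cluster to their working size before exercising group operations. What follows is a minimal, illustrative sketch of starting such an extra region server against an HBaseTestingUtility-backed mini-cluster; the TEST_UTIL instance and cluster sizing are assumptions rather than the exact code this test runs.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class StartExtraRegionServerSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative stand-in for the shared testing utility that started the
    // mini-cluster whose startup records appear earlier in this log.
    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
    TEST_UTIL.startMiniCluster(3); // three region servers to begin with

    // Ask the mini-cluster for one more region server; it reports for duty to
    // the active master just like the "Registering regionserver" record above.
    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
    JVMClusterUtil.RegionServerThread extra = cluster.startRegionServer();
    extra.waitForServerOnline();
    System.out.println("Extra region server online: "
        + extra.getRegionServer().getServerName());

    TEST_UTIL.shutdownMiniCluster();
  }
}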
2023-07-21 05:14:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:15,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:15,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:15,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:15,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:15,577 DEBUG [hconnection-0x78ef668c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:15,591 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50570, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:15,596 DEBUG [hconnection-0x78ef668c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:15,599 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:15,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:15,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:15,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:15,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:15,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:40408 deadline: 1689917655613, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:15,615 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:15,618 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:15,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:15,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:15,620 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:15,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:15,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:15,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:15,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:15,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:15,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:15,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:15,636 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:15,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:15,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:15,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:15,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:15,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:15,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:15,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:15,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:15,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:15,656 INFO [RS:3;jenkins-hbase4:33541] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33541%2C1689916455330, suffix=, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,33541,1689916455330, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:15,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(238): Moving server region ede3ac9f206f1997341b19733c39fd22, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:15,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:15,661 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE 2023-07-21 05:14:15,662 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE 2023-07-21 05:14:15,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 05:14:15,664 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:15,664 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916455663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916455663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916455663"}]},"ts":"1689916455663"} 2023-07-21 05:14:15,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:15,687 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:15,687 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:15,687 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:15,695 INFO [RS:3;jenkins-hbase4:33541] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,33541,1689916455330/jenkins-hbase4.apache.org%2C33541%2C1689916455330.1689916455657 2023-07-21 05:14:15,698 DEBUG [RS:3;jenkins-hbase4:33541] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK]] 2023-07-21 05:14:15,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ede3ac9f206f1997341b19733c39fd22, disabling compactions & flushes 2023-07-21 05:14:15,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. after waiting 0 ms 2023-07-21 05:14:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:15,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ede3ac9f206f1997341b19733c39fd22 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-21 05:14:15,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/9b73a5e2511842c197f2fb115fc1d18f 2023-07-21 05:14:16,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/9b73a5e2511842c197f2fb115fc1d18f as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/9b73a5e2511842c197f2fb115fc1d18f 2023-07-21 05:14:16,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/9b73a5e2511842c197f2fb115fc1d18f, entries=3, sequenceid=9, filesize=5.2 K 2023-07-21 05:14:16,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for ede3ac9f206f1997341b19733c39fd22 in 220ms, sequenceid=9, compaction requested=false 2023-07-21 05:14:16,074 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 05:14:16,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 05:14:16,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:16,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:16,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:16,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ede3ac9f206f1997341b19733c39fd22 move to jenkins-hbase4.apache.org,42093,1689916451283 record at close sequenceid=9 2023-07-21 05:14:16,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,097 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=CLOSED 2023-07-21 05:14:16,097 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916456097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916456097"}]},"ts":"1689916456097"} 2023-07-21 05:14:16,108 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 05:14:16,108 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,40677,1689916451367 in 433 msec 2023-07-21 05:14:16,111 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:16,261 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 05:14:16,262 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:16,262 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916456261"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916456261"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916456261"}]},"ts":"1689916456261"} 2023-07-21 05:14:16,265 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:16,424 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:16,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ede3ac9f206f1997341b19733c39fd22, NAME => 'hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:16,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:16,424 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. service=MultiRowMutationService 2023-07-21 05:14:16,425 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 05:14:16,425 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,425 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:16,425 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,425 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,428 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,429 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:16,429 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:16,430 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
ede3ac9f206f1997341b19733c39fd22 columnFamilyName m 2023-07-21 05:14:16,441 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/9b73a5e2511842c197f2fb115fc1d18f 2023-07-21 05:14:16,442 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(310): Store=ede3ac9f206f1997341b19733c39fd22/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:16,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,447 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:16,453 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ede3ac9f206f1997341b19733c39fd22; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5eba86e4, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:16,453 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:16,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22., pid=14, masterSystemTime=1689916456418 2023-07-21 05:14:16,458 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:16,458 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:16,459 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:16,459 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916456458"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916456458"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916456458"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916456458"}]},"ts":"1689916456458"} 2023-07-21 05:14:16,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 05:14:16,466 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42093,1689916451283 in 197 msec 2023-07-21 05:14:16,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE in 805 msec 2023-07-21 05:14:16,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-21 05:14:16,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to default 2023-07-21 05:14:16,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:16,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:16,666 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40677] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:57346 deadline: 1689916516666, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42093 startCode=1689916451283. As of locationSeqNum=9. 
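Between the "add rsgroup" and "Move servers done" records, the test has created the group Group_testTableMoveTruncateAndDrop_1156714162, moved the region servers on ports 40677 and 33541 into it, and let the master relocate the hbase:rsgroup region off one of them (procedure pid=12 above). Earlier, the "Got this on setup, FYI" warning shows the same client path rejecting the master's own address with a ConstraintException, which the test harness tolerates. Below is a hedged sketch of those client-side calls; the host and port literals are taken from this run purely as placeholders, and the sketch is not the test's exact code.

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupMoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "add rsgroup master" followed by an attempt to move an address that is
      // not a live region server (the master, port 42467 in this run); the move
      // is rejected, mirroring the tolerated ConstraintException logged above.
      rsGroupAdmin.addRSGroup("master");
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42467)),
            "master");
      } catch (ConstraintException e) {
        System.out.println("Got this on setup, FYI: " + e.getMessage());
      }

      // "add rsgroup Group_testTableMoveTruncateAndDrop_1156714162"
      rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_1156714162");

      // "move servers [...:40677, ...:33541] to rsgroup ..."; the master then
      // re-homes any regions on those servers that do not belong to the group,
      // which is what drives the REOPEN/MOVE procedure for hbase:rsgroup above.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40677));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33541));
      rsGroupAdmin.moveServers(servers, "Group_testTableMoveTruncateAndDrop_1156714162");
    }
  }
}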
2023-07-21 05:14:16,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:16,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:16,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:16,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:16,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:16,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:16,824 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:16,827 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40677] ipc.CallRunner(144): callId: 43 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:57338 deadline: 1689916516827, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42093 startCode=1689916451283. As of locationSeqNum=9. 
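The RegionMovedException records above (callId 3 and callId 43 against port 40677) are the expected client-side symptom of the move just completed: cached locations for the hbase:rsgroup region still point at the old server, and the HBase client updates its cache and retries against the new location automatically. For reference only, a small hedged sketch of re-resolving a region location explicitly, bypassing the client-side cache:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RefreshRegionLocationSketch {
  // Look up where the single hbase:rsgroup region currently lives, forcing a
  // fresh meta lookup (reload = true) instead of trusting the cached location.
  static HRegionLocation currentRsGroupLocation(Connection conn) throws Exception {
    TableName rsGroupTable = TableName.valueOf("hbase:rsgroup");
    try (RegionLocator locator = conn.getRegionLocator(rsGroupTable)) {
      return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
    }
  }
}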
2023-07-21 05:14:16,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-21 05:14:16,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:16,938 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:16,938 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:16,939 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:16,940 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:16,946 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:16,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:16,953 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:16,953 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:16,954 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 empty. 2023-07-21 05:14:16,954 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:16,954 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:16,954 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:16,955 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 empty. 
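The create request logged at 05:14:16,819 defines Group_testTableMoveTruncateAndDrop with a single column family 'f' and default attributes, and the five region directories prepared under .tmp/data/default/ above show the table being pre-split into five regions (boundaries 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). A minimal sketch of issuing such a request through the Admin API follows; the ASCII split keys are placeholders, since two of the real boundaries contain non-printable bytes, and the sketch is illustrative rather than the test's exact code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  static void createTable(Connection conn) throws Exception {
    // Table with one family 'f' and default attributes, matching the schema
    // printed in the "create 'Group_testTableMoveTruncateAndDrop'" record above.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();

    // Four split keys produce five regions, matching the five region
    // directories laid out under .tmp/data/default/... in the records above.
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytes("jjjjj"),   // placeholder for the binary boundary i\xBF\x14i\xBE
        Bytes.toBytes("rrrrr"),   // placeholder for the binary boundary r\x1C\xC7r\x1B
        Bytes.toBytes("zzzzz")
    };

    try (Admin admin = conn.getAdmin()) {
      admin.createTable(desc, splitKeys);
    }
  }
}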
2023-07-21 05:14:16,955 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:16,955 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 empty. 2023-07-21 05:14:16,955 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:16,955 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 empty. 2023-07-21 05:14:16,955 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb empty. 2023-07-21 05:14:16,956 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:16,956 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:16,956 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:16,956 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 05:14:16,982 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:16,984 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d372936a1d61b9cd9ca0b4a2fc93afc8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:16,984 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5a58cb9338d5d9cf76f8851475b32701, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:16,984 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0a1a58c05c04c5c95f8bffa53fef4742, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:17,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,048 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d372936a1d61b9cd9ca0b4a2fc93afc8, disabling compactions & flushes 2023-07-21 05:14:17,048 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:17,048 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:17,048 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. after waiting 0 ms 2023-07-21 05:14:17,048 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:17,048 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 
2023-07-21 05:14:17,048 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d372936a1d61b9cd9ca0b4a2fc93afc8: 2023-07-21 05:14:17,049 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9c7cd237c42cfc0e0416c47e641026bb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:17,050 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0a1a58c05c04c5c95f8bffa53fef4742, disabling compactions & flushes 2023-07-21 05:14:17,052 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:17,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:17,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. after waiting 0 ms 2023-07-21 05:14:17,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:17,052 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 
2023-07-21 05:14:17,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0a1a58c05c04c5c95f8bffa53fef4742: 2023-07-21 05:14:17,053 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 88e6a38d8b6cdb7cd40ef970d574ab74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:17,061 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,062 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5a58cb9338d5d9cf76f8851475b32701, disabling compactions & flushes 2023-07-21 05:14:17,062 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,062 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,062 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. after waiting 0 ms 2023-07-21 05:14:17,062 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,062 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 
2023-07-21 05:14:17,062 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5a58cb9338d5d9cf76f8851475b32701: 2023-07-21 05:14:17,093 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 88e6a38d8b6cdb7cd40ef970d574ab74, disabling compactions & flushes 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 9c7cd237c42cfc0e0416c47e641026bb, disabling compactions & flushes 2023-07-21 05:14:17,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. after waiting 0 ms 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 88e6a38d8b6cdb7cd40ef970d574ab74: 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 
after waiting 0 ms 2023-07-21 05:14:17,095 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:17,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:17,096 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9c7cd237c42cfc0e0416c47e641026bb: 2023-07-21 05:14:17,107 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:17,109 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916457108"}]},"ts":"1689916457108"} 2023-07-21 05:14:17,109 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916457108"}]},"ts":"1689916457108"} 2023-07-21 05:14:17,109 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916457108"}]},"ts":"1689916457108"} 2023-07-21 05:14:17,109 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916457108"}]},"ts":"1689916457108"} 2023-07-21 05:14:17,110 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916457108"}]},"ts":"1689916457108"} 2023-07-21 05:14:17,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:17,177 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
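The sequence above — writing the table descriptor and region directories under .tmp, initializing and closing each of the five regions, then "Added 5 regions to meta" — is the server side of a pre-split create-table call. A minimal client-side sketch, assuming column family 'f' and split keys matching the STARTKEY/ENDKEY pairs visible in the log; whether the test derives its keys exactly this way is an assumption:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Four split keys give five regions; Bytes.split is assumed here to yield "aaaaa",
          // two evenly spaced binary keys (the i\xBF\x14i\xBE and r\x1C\xC7r\x1B boundaries
          // in the log look like such keys) and "zzzzz".
          byte[][] splitKeys = Bytes.split(Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 2);
          // Blocks until the master's CreateTableProcedure (pid=15 above) completes, which is
          // what the repeated "Checking to see if procedure is done pid=15" calls poll for.
          admin.createTable(desc, splitKeys);
        }
      }
    }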
2023-07-21 05:14:17,179 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:17,180 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916457179"}]},"ts":"1689916457179"} 2023-07-21 05:14:17,182 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 05:14:17,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:17,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:17,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:17,194 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:17,195 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, ASSIGN}] 2023-07-21 05:14:17,199 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, ASSIGN 2023-07-21 05:14:17,199 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, ASSIGN 2023-07-21 05:14:17,201 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, ASSIGN 2023-07-21 05:14:17,201 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, ASSIGN 2023-07-21 05:14:17,202 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, ASSIGN 2023-07-21 05:14:17,202 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:17,203 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:17,203 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:17,203 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:17,204 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:17,353 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 05:14:17,356 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:17,356 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,356 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,356 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:17,356 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916457356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916457356"}]},"ts":"1689916457356"} 2023-07-21 05:14:17,356 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916457356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916457356"}]},"ts":"1689916457356"} 2023-07-21 05:14:17,356 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916457356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916457356"}]},"ts":"1689916457356"} 2023-07-21 05:14:17,356 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,356 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916457356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916457356"}]},"ts":"1689916457356"} 2023-07-21 05:14:17,357 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916457356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916457356"}]},"ts":"1689916457356"} 2023-07-21 05:14:17,360 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=18, state=RUNNABLE; OpenRegionProcedure 
d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:17,361 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:17,363 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=19, state=RUNNABLE; OpenRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:17,368 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=17, state=RUNNABLE; OpenRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:17,384 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=16, state=RUNNABLE; OpenRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:17,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:17,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:17,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d372936a1d61b9cd9ca0b4a2fc93afc8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 05:14:17,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
2023-07-21 05:14:17,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88e6a38d8b6cdb7cd40ef970d574ab74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 05:14:17,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,547 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,549 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,550 DEBUG [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/f 2023-07-21 05:14:17,550 DEBUG [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/f 2023-07-21 05:14:17,550 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d372936a1d61b9cd9ca0b4a2fc93afc8 columnFamilyName f 2023-07-21 05:14:17,551 DEBUG [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/f 2023-07-21 05:14:17,551 DEBUG [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/f 2023-07-21 05:14:17,551 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] regionserver.HStore(310): Store=d372936a1d61b9cd9ca0b4a2fc93afc8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:17,552 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88e6a38d8b6cdb7cd40ef970d574ab74 columnFamilyName f 2023-07-21 05:14:17,553 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] regionserver.HStore(310): Store=88e6a38d8b6cdb7cd40ef970d574ab74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:17,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:17,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:17,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:17,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88e6a38d8b6cdb7cd40ef970d574ab74; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11575090240, jitterRate=0.07801428437232971}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:17,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88e6a38d8b6cdb7cd40ef970d574ab74: 2023-07-21 05:14:17,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:17,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74., pid=22, masterSystemTime=1689916457538 2023-07-21 05:14:17,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d372936a1d61b9cd9ca0b4a2fc93afc8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9701327680, jitterRate=-0.09649345278739929}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:17,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d372936a1d61b9cd9ca0b4a2fc93afc8: 2023-07-21 05:14:17,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8., pid=21, masterSystemTime=1689916457536 2023-07-21 05:14:17,579 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:17,579 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457579"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916457579"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916457579"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916457579"}]},"ts":"1689916457579"} 2023-07-21 05:14:17,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:17,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 
2023-07-21 05:14:17,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a58cb9338d5d9cf76f8851475b32701, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 05:14:17,581 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,582 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457581"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916457581"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916457581"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916457581"}]},"ts":"1689916457581"} 2023-07-21 05:14:17,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,585 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:17,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 
2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a1a58c05c04c5c95f8bffa53fef4742, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,590 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,592 DEBUG [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/f 2023-07-21 05:14:17,592 DEBUG [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/f 2023-07-21 05:14:17,592 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-21 05:14:17,593 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; OpenRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,42315,1689916451166 in 223 msec 2023-07-21 05:14:17,593 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a58cb9338d5d9cf76f8851475b32701 columnFamilyName f 2023-07-21 05:14:17,593 DEBUG [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/f 2023-07-21 05:14:17,593 DEBUG [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/f 2023-07-21 05:14:17,594 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a1a58c05c04c5c95f8bffa53fef4742 columnFamilyName f 2023-07-21 05:14:17,595 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] regionserver.HStore(310): Store=0a1a58c05c04c5c95f8bffa53fef4742/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:17,595 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] regionserver.HStore(310): Store=5a58cb9338d5d9cf76f8851475b32701/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:17,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, ASSIGN in 398 msec 2023-07-21 05:14:17,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=18 2023-07-21 05:14:17,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; OpenRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,42093,1689916451283 in 227 msec 2023-07-21 05:14:17,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, ASSIGN in 402 msec 2023-07-21 05:14:17,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:17,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:17,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:17,614 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5a58cb9338d5d9cf76f8851475b32701; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9972828480, jitterRate=-0.07120797038078308}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:17,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5a58cb9338d5d9cf76f8851475b32701: 2023-07-21 05:14:17,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:17,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701., pid=25, masterSystemTime=1689916457536 2023-07-21 05:14:17,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0a1a58c05c04c5c95f8bffa53fef4742; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10135866240, jitterRate=-0.05602389574050903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:17,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0a1a58c05c04c5c95f8bffa53fef4742: 2023-07-21 05:14:17,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:17,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 
2023-07-21 05:14:17,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9c7cd237c42cfc0e0416c47e641026bb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 05:14:17,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:17,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742., pid=24, masterSystemTime=1689916457538 2023-07-21 05:14:17,631 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,633 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,634 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916457633"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916457633"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916457633"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916457633"}]},"ts":"1689916457633"} 2023-07-21 05:14:17,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:17,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 
2023-07-21 05:14:17,635 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:17,644 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457635"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916457635"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916457635"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916457635"}]},"ts":"1689916457635"} 2023-07-21 05:14:17,644 DEBUG [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/f 2023-07-21 05:14:17,645 DEBUG [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/f 2023-07-21 05:14:17,646 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9c7cd237c42cfc0e0416c47e641026bb columnFamilyName f 2023-07-21 05:14:17,646 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] regionserver.HStore(310): Store=9c7cd237c42cfc0e0416c47e641026bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:17,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,655 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=16 2023-07-21 05:14:17,655 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=16, state=SUCCESS; OpenRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,42093,1689916451283 in 261 msec 2023-07-21 05:14:17,656 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, 
resume processing ppid=17 2023-07-21 05:14:17,656 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=17, state=SUCCESS; OpenRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,42315,1689916451166 in 279 msec 2023-07-21 05:14:17,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, ASSIGN in 460 msec 2023-07-21 05:14:17,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, ASSIGN in 461 msec 2023-07-21 05:14:17,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:17,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:17,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9c7cd237c42cfc0e0416c47e641026bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10711836000, jitterRate=-0.002382531762123108}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:17,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9c7cd237c42cfc0e0416c47e641026bb: 2023-07-21 05:14:17,668 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb., pid=23, masterSystemTime=1689916457536 2023-07-21 05:14:17,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:17,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 
2023-07-21 05:14:17,671 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:17,671 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916457671"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916457671"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916457671"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916457671"}]},"ts":"1689916457671"} 2023-07-21 05:14:17,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=19 2023-07-21 05:14:17,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=19, state=SUCCESS; OpenRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,42093,1689916451283 in 310 msec 2023-07-21 05:14:17,680 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=15 2023-07-21 05:14:17,681 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, ASSIGN in 481 msec 2023-07-21 05:14:17,683 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:17,683 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916457683"}]},"ts":"1689916457683"} 2023-07-21 05:14:17,692 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 05:14:17,699 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:17,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 880 msec 2023-07-21 05:14:17,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:17,958 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-21 05:14:17,959 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 05:14:17,960 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:17,969 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
2023-07-21 05:14:17,970 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:17,972 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-21 05:14:17,973 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:17,980 DEBUG [Listener at localhost/34619] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:17,985 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:17,988 DEBUG [Listener at localhost/34619] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:17,996 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57360, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:17,997 DEBUG [Listener at localhost/34619] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:18,009 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50576, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:18,011 DEBUG [Listener at localhost/34619] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:18,013 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50710, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:18,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:18,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:18,026 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:18,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:18,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:18,046 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 5a58cb9338d5d9cf76f8851475b32701 to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:18,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:18,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:18,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:18,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:18,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, REOPEN/MOVE 2023-07-21 05:14:18,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 0a1a58c05c04c5c95f8bffa53fef4742 to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,049 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, REOPEN/MOVE 2023-07-21 05:14:18,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:18,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:18,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:18,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:18,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:18,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, REOPEN/MOVE 2023-07-21 05:14:18,052 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:18,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminServer(345): Moving region d372936a1d61b9cd9ca0b4a2fc93afc8 to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,053 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, REOPEN/MOVE 2023-07-21 05:14:18,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:18,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:18,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:18,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:18,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:18,054 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458052"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458052"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458052"}]},"ts":"1689916458052"} 2023-07-21 05:14:18,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, REOPEN/MOVE 2023-07-21 05:14:18,056 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:18,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 9c7cd237c42cfc0e0416c47e641026bb to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,056 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458056"}]},"ts":"1689916458056"} 2023-07-21 05:14:18,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:18,057 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, REOPEN/MOVE 2023-07-21 05:14:18,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:18,057 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:18,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:18,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:18,058 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:18,059 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458058"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458058"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458058"}]},"ts":"1689916458058"} 2023-07-21 05:14:18,059 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=26, state=RUNNABLE; CloseRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:18,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, REOPEN/MOVE 2023-07-21 05:14:18,060 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, REOPEN/MOVE 2023-07-21 05:14:18,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 88e6a38d8b6cdb7cd40ef970d574ab74 to RSGroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:18,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:18,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:18,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:18,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:18,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:18,062 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=27, state=RUNNABLE; CloseRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:18,062 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:18,063 DEBUG [PEWorker-2] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458062"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458062"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458062"}]},"ts":"1689916458062"} 2023-07-21 05:14:18,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, REOPEN/MOVE 2023-07-21 05:14:18,069 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, REOPEN/MOVE 2023-07-21 05:14:18,070 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:18,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1156714162, current retry=0 2023-07-21 05:14:18,072 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:18,072 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458071"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458071"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458071"}]},"ts":"1689916458071"} 2023-07-21 05:14:18,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; CloseRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:18,077 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=31, state=RUNNABLE; CloseRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:18,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5a58cb9338d5d9cf76f8851475b32701, disabling compactions & flushes 2023-07-21 05:14:18,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 
2023-07-21 05:14:18,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. after waiting 0 ms 2023-07-21 05:14:18,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0a1a58c05c04c5c95f8bffa53fef4742, disabling compactions & flushes 2023-07-21 05:14:18,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. after waiting 0 ms 2023-07-21 05:14:18,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:18,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,271 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5a58cb9338d5d9cf76f8851475b32701: 2023-07-21 05:14:18,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5a58cb9338d5d9cf76f8851475b32701 move to jenkins-hbase4.apache.org,33541,1689916455330 record at close sequenceid=2 2023-07-21 05:14:18,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9c7cd237c42cfc0e0416c47e641026bb, disabling compactions & flushes 2023-07-21 05:14:18,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 
2023-07-21 05:14:18,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:18,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. after waiting 0 ms 2023-07-21 05:14:18,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:18,290 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=CLOSED 2023-07-21 05:14:18,290 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916458290"}]},"ts":"1689916458290"} 2023-07-21 05:14:18,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:18,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0a1a58c05c04c5c95f8bffa53fef4742: 2023-07-21 05:14:18,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0a1a58c05c04c5c95f8bffa53fef4742 move to jenkins-hbase4.apache.org,33541,1689916455330 record at close sequenceid=2 2023-07-21 05:14:18,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,307 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=CLOSED 2023-07-21 05:14:18,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88e6a38d8b6cdb7cd40ef970d574ab74, disabling compactions & flushes 2023-07-21 05:14:18,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:18,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:18,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
after waiting 0 ms 2023-07-21 05:14:18,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:18,311 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458307"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916458307"}]},"ts":"1689916458307"} 2023-07-21 05:14:18,313 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=26 2023-07-21 05:14:18,313 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=26, state=SUCCESS; CloseRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,42093,1689916451283 in 236 msec 2023-07-21 05:14:18,315 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:18,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:18,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:18,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:18,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9c7cd237c42cfc0e0416c47e641026bb: 2023-07-21 05:14:18,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9c7cd237c42cfc0e0416c47e641026bb move to jenkins-hbase4.apache.org,40677,1689916451367 record at close sequenceid=2 2023-07-21 05:14:18,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
2023-07-21 05:14:18,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88e6a38d8b6cdb7cd40ef970d574ab74: 2023-07-21 05:14:18,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 88e6a38d8b6cdb7cd40ef970d574ab74 move to jenkins-hbase4.apache.org,33541,1689916455330 record at close sequenceid=2 2023-07-21 05:14:18,328 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=27 2023-07-21 05:14:18,329 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=27, state=SUCCESS; CloseRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,42315,1689916451166 in 252 msec 2023-07-21 05:14:18,331 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:18,331 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=CLOSED 2023-07-21 05:14:18,331 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458331"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916458331"}]},"ts":"1689916458331"} 2023-07-21 05:14:18,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d372936a1d61b9cd9ca0b4a2fc93afc8, disabling compactions & flushes 2023-07-21 05:14:18,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:18,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:18,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. after waiting 0 ms 2023-07-21 05:14:18,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 
2023-07-21 05:14:18,340 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=CLOSED 2023-07-21 05:14:18,340 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458339"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916458339"}]},"ts":"1689916458339"} 2023-07-21 05:14:18,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:18,349 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-21 05:14:18,349 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; CloseRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,42093,1689916451283 in 268 msec 2023-07-21 05:14:18,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=31 2023-07-21 05:14:18,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:18,351 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=31, state=SUCCESS; CloseRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,42315,1689916451166 in 268 msec 2023-07-21 05:14:18,351 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:18,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d372936a1d61b9cd9ca0b4a2fc93afc8: 2023-07-21 05:14:18,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d372936a1d61b9cd9ca0b4a2fc93afc8 move to jenkins-hbase4.apache.org,40677,1689916451367 record at close sequenceid=2 2023-07-21 05:14:18,352 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:18,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,359 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=CLOSED 2023-07-21 05:14:18,359 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458359"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916458359"}]},"ts":"1689916458359"} 2023-07-21 05:14:18,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-21 05:14:18,370 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,42093,1689916451283 in 293 msec 2023-07-21 05:14:18,374 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:18,466 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-21 05:14:18,466 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:18,466 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,467 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458466"}]},"ts":"1689916458466"} 2023-07-21 05:14:18,466 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:18,467 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458466"}]},"ts":"1689916458466"} 2023-07-21 05:14:18,467 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458466"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458466"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458466"}]},"ts":"1689916458466"} 2023-07-21 05:14:18,467 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,467 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458467"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458467"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458467"}]},"ts":"1689916458467"} 2023-07-21 05:14:18,468 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,468 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458468"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916458468"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916458468"}]},"ts":"1689916458468"} 2023-07-21 05:14:18,470 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=28, state=RUNNABLE; OpenRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:18,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=31, state=RUNNABLE; OpenRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:18,476 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=29, state=RUNNABLE; OpenRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:18,477 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=27, state=RUNNABLE; OpenRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:18,479 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:18,627 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,627 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:18,629 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:18,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 
2023-07-21 05:14:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9c7cd237c42cfc0e0416c47e641026bb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 05:14:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,639 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a58cb9338d5d9cf76f8851475b32701, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 05:14:18,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:18,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,639 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,641 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,642 DEBUG [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/f 2023-07-21 05:14:18,642 DEBUG [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/f 2023-07-21 05:14:18,643 DEBUG [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/f 2023-07-21 05:14:18,643 DEBUG [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/f 2023-07-21 05:14:18,643 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9c7cd237c42cfc0e0416c47e641026bb columnFamilyName f 2023-07-21 05:14:18,644 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a58cb9338d5d9cf76f8851475b32701 columnFamilyName f 2023-07-21 05:14:18,645 INFO [StoreOpener-9c7cd237c42cfc0e0416c47e641026bb-1] regionserver.HStore(310): Store=9c7cd237c42cfc0e0416c47e641026bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:18,648 INFO [StoreOpener-5a58cb9338d5d9cf76f8851475b32701-1] regionserver.HStore(310): Store=5a58cb9338d5d9cf76f8851475b32701/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:18,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,655 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:18,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:18,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5a58cb9338d5d9cf76f8851475b32701; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9933208480, jitterRate=-0.0748978704214096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:18,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5a58cb9338d5d9cf76f8851475b32701: 2023-07-21 05:14:18,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9c7cd237c42cfc0e0416c47e641026bb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11977462080, jitterRate=0.11548808217048645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:18,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9c7cd237c42cfc0e0416c47e641026bb: 2023-07-21 05:14:18,698 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701., pid=40, masterSystemTime=1689916458627 2023-07-21 05:14:18,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb., pid=38, masterSystemTime=1689916458626 2023-07-21 05:14:18,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:18,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 
2023-07-21 05:14:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a1a58c05c04c5c95f8bffa53fef4742, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 05:14:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,721 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,721 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458721"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916458721"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916458721"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916458721"}]},"ts":"1689916458721"} 2023-07-21 05:14:18,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:18,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:18,722 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 
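The RegionStateStore puts above are the master updating each region's row in hbase:meta as the regions come online. From a client, the resulting assignments can be read back with a RegionLocator; a small illustrative sketch (table name taken from this test, connection details assumed to come from the ambient configuration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class PrintRegionLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
      // Each location mirrors the regioninfo/server columns written to hbase:meta above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}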
2023-07-21 05:14:18,722 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:18,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d372936a1d61b9cd9ca0b4a2fc93afc8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 05:14:18,724 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458723"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916458723"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916458723"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916458723"}]},"ts":"1689916458723"} 2023-07-21 05:14:18,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:18,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,726 DEBUG [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/f 2023-07-21 05:14:18,727 DEBUG [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/f 2023-07-21 05:14:18,727 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a1a58c05c04c5c95f8bffa53fef4742 columnFamilyName f 2023-07-21 05:14:18,728 INFO [StoreOpener-0a1a58c05c04c5c95f8bffa53fef4742-1] regionserver.HStore(310): Store=0a1a58c05c04c5c95f8bffa53fef4742/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:18,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-21 05:14:18,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,33541,1689916455330 in 248 msec 2023-07-21 05:14:18,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,736 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=29 2023-07-21 05:14:18,736 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=29, state=SUCCESS; OpenRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,40677,1689916451367 in 253 msec 2023-07-21 05:14:18,737 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,744 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, REOPEN/MOVE in 685 msec 2023-07-21 05:14:18,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,747 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, REOPEN/MOVE in 679 msec 2023-07-21 05:14:18,748 DEBUG [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/f 2023-07-21 05:14:18,748 DEBUG [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/f 2023-07-21 05:14:18,749 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d372936a1d61b9cd9ca0b4a2fc93afc8 columnFamilyName f 2023-07-21 05:14:18,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:18,752 INFO [StoreOpener-d372936a1d61b9cd9ca0b4a2fc93afc8-1] regionserver.HStore(310): Store=d372936a1d61b9cd9ca0b4a2fc93afc8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:18,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0a1a58c05c04c5c95f8bffa53fef4742; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11073530560, jitterRate=0.03130289912223816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:18,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0a1a58c05c04c5c95f8bffa53fef4742: 2023-07-21 05:14:18,755 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742., pid=39, masterSystemTime=1689916458627 2023-07-21 05:14:18,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,757 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:18,757 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
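The CompactionConfiguration records above all report the same store-compaction settings (minCompactSize 128 MB, 3-10 files per compaction, ratio 1.2, off-peak ratio 5.0). A hedged sketch of setting the same knobs programmatically, assuming the stock configuration property names rather than anything specific to this test run:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    // A cluster would normally pick these up from hbase-site.xml; setting them here is illustrative only.
  }
}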
2023-07-21 05:14:18,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88e6a38d8b6cdb7cd40ef970d574ab74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 05:14:18,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:18,759 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,760 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458759"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916458759"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916458759"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916458759"}]},"ts":"1689916458759"} 2023-07-21 05:14:18,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:18,762 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d372936a1d61b9cd9ca0b4a2fc93afc8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11366931520, jitterRate=0.05862799286842346}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:18,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d372936a1d61b9cd9ca0b4a2fc93afc8: 2023-07-21 05:14:18,765 DEBUG [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/f 2023-07-21 05:14:18,765 DEBUG [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/f 2023-07-21 05:14:18,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8., pid=36, masterSystemTime=1689916458626 2023-07-21 05:14:18,767 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88e6a38d8b6cdb7cd40ef970d574ab74 columnFamilyName f 2023-07-21 05:14:18,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=27 2023-07-21 05:14:18,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=27, state=SUCCESS; OpenRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,33541,1689916455330 in 285 msec 2023-07-21 05:14:18,768 INFO [StoreOpener-88e6a38d8b6cdb7cd40ef970d574ab74-1] regionserver.HStore(310): Store=88e6a38d8b6cdb7cd40ef970d574ab74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:18,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:18,770 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 
2023-07-21 05:14:18,771 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:18,771 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916458771"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916458771"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916458771"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916458771"}]},"ts":"1689916458771"} 2023-07-21 05:14:18,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,773 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, REOPEN/MOVE in 717 msec 2023-07-21 05:14:18,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,784 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=28 2023-07-21 05:14:18,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:18,784 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=28, state=SUCCESS; OpenRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,40677,1689916451367 in 309 msec 2023-07-21 05:14:18,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88e6a38d8b6cdb7cd40ef970d574ab74; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11331503040, jitterRate=0.05532845854759216}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:18,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88e6a38d8b6cdb7cd40ef970d574ab74: 2023-07-21 05:14:18,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, REOPEN/MOVE in 731 msec 2023-07-21 05:14:18,786 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74., pid=37, masterSystemTime=1689916458627 2023-07-21 05:14:18,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
2023-07-21 05:14:18,788 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:18,789 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:18,789 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916458789"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916458789"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916458789"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916458789"}]},"ts":"1689916458789"} 2023-07-21 05:14:18,793 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=31 2023-07-21 05:14:18,793 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=31, state=SUCCESS; OpenRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,33541,1689916455330 in 318 msec 2023-07-21 05:14:18,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, REOPEN/MOVE in 732 msec 2023-07-21 05:14:19,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-21 05:14:19,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1156714162. 
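The MoveTables and GetRSGroupInfoOfTable RPCs above are what a client issues when relocating a table between region server groups. A sketch of the equivalent client-side call, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (the target group name below is the one generated by this test run):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Issues the MoveTables RPC seen in the log, then reads the group back (GetRSGroupInfoOfTable).
      rsGroupAdmin.moveTables(Collections.singleton(table), "Group_testTableMoveTruncateAndDrop_1156714162");
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table now in group: " + info.getName());
    }
  }
}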
2023-07-21 05:14:19,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:19,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:19,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:19,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:19,080 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:19,088 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,106 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916459106"}]},"ts":"1689916459106"} 2023-07-21 05:14:19,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 05:14:19,108 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 05:14:19,110 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 05:14:19,113 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, UNASSIGN}] 2023-07-21 05:14:19,116 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, UNASSIGN 2023-07-21 05:14:19,116 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, UNASSIGN 2023-07-21 05:14:19,116 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, UNASSIGN 2023-07-21 05:14:19,117 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, UNASSIGN 2023-07-21 05:14:19,117 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, UNASSIGN 2023-07-21 05:14:19,118 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:19,118 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:19,118 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459118"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916459118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916459118"}]},"ts":"1689916459118"} 2023-07-21 05:14:19,118 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:19,118 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:19,118 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459118"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916459118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916459118"}]},"ts":"1689916459118"} 2023-07-21 05:14:19,118 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916459118"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916459118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916459118"}]},"ts":"1689916459118"} 2023-07-21 05:14:19,119 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459118"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916459118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916459118"}]},"ts":"1689916459118"} 2023-07-21 05:14:19,120 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:19,120 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916459120"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916459120"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916459120"}]},"ts":"1689916459120"} 2023-07-21 05:14:19,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=44, state=RUNNABLE; CloseRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:19,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:19,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=42, state=RUNNABLE; CloseRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:19,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:19,125 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:19,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 05:14:19,221 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 05:14:19,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:19,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d372936a1d61b9cd9ca0b4a2fc93afc8, disabling compactions & flushes 2023-07-21 05:14:19,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:19,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:19,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. after waiting 0 ms 2023-07-21 05:14:19,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:19,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:19,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0a1a58c05c04c5c95f8bffa53fef4742, disabling compactions & flushes 2023-07-21 05:14:19,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:19,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:19,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. after waiting 0 ms 2023-07-21 05:14:19,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 2023-07-21 05:14:19,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:19,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8. 2023-07-21 05:14:19,291 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d372936a1d61b9cd9ca0b4a2fc93afc8: 2023-07-21 05:14:19,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:19,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742. 
2023-07-21 05:14:19,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0a1a58c05c04c5c95f8bffa53fef4742: 2023-07-21 05:14:19,297 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 05:14:19,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:19,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:19,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9c7cd237c42cfc0e0416c47e641026bb, disabling compactions & flushes 2023-07-21 05:14:19,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:19,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:19,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. after waiting 0 ms 2023-07-21 05:14:19,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:19,298 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=d372936a1d61b9cd9ca0b4a2fc93afc8, regionState=CLOSED 2023-07-21 05:14:19,298 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459298"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916459298"}]},"ts":"1689916459298"} 2023-07-21 05:14:19,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:19,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:19,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88e6a38d8b6cdb7cd40ef970d574ab74, disabling compactions & flushes 2023-07-21 05:14:19,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:19,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:19,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 
after waiting 0 ms 2023-07-21 05:14:19,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:19,300 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 05:14:19,300 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=0a1a58c05c04c5c95f8bffa53fef4742, regionState=CLOSED 2023-07-21 05:14:19,300 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459300"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916459300"}]},"ts":"1689916459300"} 2023-07-21 05:14:19,301 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-21 05:14:19,303 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:19,304 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 05:14:19,304 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:19,304 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 05:14:19,305 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:19,305 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 05:14:19,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=44 2023-07-21 05:14:19,308 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; CloseRegionProcedure d372936a1d61b9cd9ca0b4a2fc93afc8, server=jenkins-hbase4.apache.org,40677,1689916451367 in 182 msec 2023-07-21 05:14:19,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-21 05:14:19,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 0a1a58c05c04c5c95f8bffa53fef4742, server=jenkins-hbase4.apache.org,33541,1689916455330 in 183 msec 2023-07-21 05:14:19,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d372936a1d61b9cd9ca0b4a2fc93afc8, UNASSIGN in 195 msec 2023-07-21 05:14:19,311 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0a1a58c05c04c5c95f8bffa53fef4742, UNASSIGN in 196 msec 2023-07-21 05:14:19,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:19,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:19,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb. 2023-07-21 05:14:19,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9c7cd237c42cfc0e0416c47e641026bb: 2023-07-21 05:14:19,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74. 2023-07-21 05:14:19,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88e6a38d8b6cdb7cd40ef970d574ab74: 2023-07-21 05:14:19,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:19,324 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9c7cd237c42cfc0e0416c47e641026bb, regionState=CLOSED 2023-07-21 05:14:19,324 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916459324"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916459324"}]},"ts":"1689916459324"} 2023-07-21 05:14:19,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:19,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:19,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5a58cb9338d5d9cf76f8851475b32701, disabling compactions & flushes 2023-07-21 05:14:19,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:19,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:19,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 
after waiting 0 ms 2023-07-21 05:14:19,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:19,328 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=88e6a38d8b6cdb7cd40ef970d574ab74, regionState=CLOSED 2023-07-21 05:14:19,328 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916459328"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916459328"}]},"ts":"1689916459328"} 2023-07-21 05:14:19,334 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-21 05:14:19,335 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure 9c7cd237c42cfc0e0416c47e641026bb, server=jenkins-hbase4.apache.org,40677,1689916451367 in 204 msec 2023-07-21 05:14:19,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:19,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701. 2023-07-21 05:14:19,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5a58cb9338d5d9cf76f8851475b32701: 2023-07-21 05:14:19,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9c7cd237c42cfc0e0416c47e641026bb, UNASSIGN in 222 msec 2023-07-21 05:14:19,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:19,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-21 05:14:19,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 88e6a38d8b6cdb7cd40ef970d574ab74, server=jenkins-hbase4.apache.org,33541,1689916455330 in 206 msec 2023-07-21 05:14:19,343 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5a58cb9338d5d9cf76f8851475b32701, regionState=CLOSED 2023-07-21 05:14:19,343 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916459343"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916459343"}]},"ts":"1689916459343"} 2023-07-21 05:14:19,345 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88e6a38d8b6cdb7cd40ef970d574ab74, UNASSIGN in 228 msec 2023-07-21 05:14:19,349 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=42 
2023-07-21 05:14:19,349 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; CloseRegionProcedure 5a58cb9338d5d9cf76f8851475b32701, server=jenkins-hbase4.apache.org,33541,1689916455330 in 223 msec 2023-07-21 05:14:19,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=41 2023-07-21 05:14:19,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5a58cb9338d5d9cf76f8851475b32701, UNASSIGN in 236 msec 2023-07-21 05:14:19,357 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916459356"}]},"ts":"1689916459356"} 2023-07-21 05:14:19,359 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 05:14:19,361 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 05:14:19,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 267 msec 2023-07-21 05:14:19,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-21 05:14:19,411 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-21 05:14:19,413 INFO [Listener at localhost/34619] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:19,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 05:14:19,435 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 05:14:19,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 05:14:19,449 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:19,449 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:19,449 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:19,449 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:19,449 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:19,454 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits] 2023-07-21 05:14:19,454 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits] 2023-07-21 05:14:19,455 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits] 2023-07-21 05:14:19,454 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits] 2023-07-21 05:14:19,455 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits] 2023-07-21 05:14:19,483 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701/recovered.edits/7.seqid 2023-07-21 05:14:19,484 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5a58cb9338d5d9cf76f8851475b32701 2023-07-21 05:14:19,490 
DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74/recovered.edits/7.seqid 2023-07-21 05:14:19,490 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8/recovered.edits/7.seqid 2023-07-21 05:14:19,491 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88e6a38d8b6cdb7cd40ef970d574ab74 2023-07-21 05:14:19,492 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742/recovered.edits/7.seqid 2023-07-21 05:14:19,493 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb/recovered.edits/7.seqid 2023-07-21 05:14:19,494 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0a1a58c05c04c5c95f8bffa53fef4742 2023-07-21 05:14:19,494 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d372936a1d61b9cd9ca0b4a2fc93afc8 2023-07-21 05:14:19,495 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9c7cd237c42cfc0e0416c47e641026bb 2023-07-21 05:14:19,495 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 05:14:19,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 05:14:19,540 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 05:14:19,549 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' 
descriptor. 2023-07-21 05:14:19,550 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-21 05:14:19,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916459550"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916459550"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916459550"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,551 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916459550"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,551 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916459550"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,554 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 05:14:19,554 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5a58cb9338d5d9cf76f8851475b32701, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916456815.5a58cb9338d5d9cf76f8851475b32701.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 0a1a58c05c04c5c95f8bffa53fef4742, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916456815.0a1a58c05c04c5c95f8bffa53fef4742.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => d372936a1d61b9cd9ca0b4a2fc93afc8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916456815.d372936a1d61b9cd9ca0b4a2fc93afc8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 9c7cd237c42cfc0e0416c47e641026bb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916456815.9c7cd237c42cfc0e0416c47e641026bb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 88e6a38d8b6cdb7cd40ef970d574ab74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916456815.88e6a38d8b6cdb7cd40ef970d574ab74.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 05:14:19,554 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 05:14:19,554 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916459554"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:19,559 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 05:14:19,569 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:19,569 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:19,569 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:19,569 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:19,569 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:19,570 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 empty. 2023-07-21 05:14:19,570 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 empty. 2023-07-21 05:14:19,571 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e empty. 2023-07-21 05:14:19,571 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 empty. 2023-07-21 05:14:19,571 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c empty. 
2023-07-21 05:14:19,572 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:19,572 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:19,572 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:19,572 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:19,573 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:19,573 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 05:14:19,607 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:19,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8661a56845e86934e0c495f3a00e284c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:19,615 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 01237b35b0dfaf916d9d99e78d7d98c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:19,615 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 26b88d4eba538c77983846412998755e, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:19,663 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:19,663 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8661a56845e86934e0c495f3a00e284c, disabling compactions & flushes 2023-07-21 05:14:19,663 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:19,663 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:19,663 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. after waiting 0 ms 2023-07-21 05:14:19,663 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:19,663 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 
2023-07-21 05:14:19,664 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8661a56845e86934e0c495f3a00e284c: 2023-07-21 05:14:19,664 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => dade82b17d8e96876b676eb8915347f4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:19,693 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:19,693 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 26b88d4eba538c77983846412998755e, disabling compactions & flushes 2023-07-21 05:14:19,694 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:19,694 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:19,694 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. after waiting 0 ms 2023-07-21 05:14:19,694 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:19,694 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 
2023-07-21 05:14:19,694 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 26b88d4eba538c77983846412998755e: 2023-07-21 05:14:19,694 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3a3e96c0d83cc4f475b4caf12263f289, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:19,700 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:19,701 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing dade82b17d8e96876b676eb8915347f4, disabling compactions & flushes 2023-07-21 05:14:19,701 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:19,701 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:19,701 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. after waiting 0 ms 2023-07-21 05:14:19,701 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:19,701 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 
2023-07-21 05:14:19,701 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for dade82b17d8e96876b676eb8915347f4: 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 3a3e96c0d83cc4f475b4caf12263f289, disabling compactions & flushes 2023-07-21 05:14:19,712 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. after waiting 0 ms 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:19,712 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:19,712 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 3a3e96c0d83cc4f475b4caf12263f289: 2023-07-21 05:14:19,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 05:14:20,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 01237b35b0dfaf916d9d99e78d7d98c8, disabling compactions & flushes 2023-07-21 05:14:20,086 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 
2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. after waiting 0 ms 2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,086 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,086 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 01237b35b0dfaf916d9d99e78d7d98c8: 2023-07-21 05:14:20,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460092"}]},"ts":"1689916460092"} 2023-07-21 05:14:20,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460092"}]},"ts":"1689916460092"} 2023-07-21 05:14:20,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460092"}]},"ts":"1689916460092"} 2023-07-21 05:14:20,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460092"}]},"ts":"1689916460092"} 2023-07-21 05:14:20,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460092"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460092"}]},"ts":"1689916460092"} 2023-07-21 05:14:20,096 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 05:14:20,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916460097"}]},"ts":"1689916460097"} 2023-07-21 05:14:20,100 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 05:14:20,105 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:20,105 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:20,105 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:20,105 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:20,108 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, ASSIGN}] 2023-07-21 05:14:20,110 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, ASSIGN 2023-07-21 05:14:20,111 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, ASSIGN 2023-07-21 05:14:20,111 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, ASSIGN 2023-07-21 05:14:20,111 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, ASSIGN 2023-07-21 05:14:20,111 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, ASSIGN 2023-07-21 05:14:20,112 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:20,112 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:20,112 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:20,112 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:20,112 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:20,262 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 05:14:20,266 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=8661a56845e86934e0c495f3a00e284c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,266 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=01237b35b0dfaf916d9d99e78d7d98c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,266 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=dade82b17d8e96876b676eb8915347f4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,266 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=3a3e96c0d83cc4f475b4caf12263f289, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,266 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=26b88d4eba538c77983846412998755e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,266 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460265"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460265"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460265"}]},"ts":"1689916460265"} 2023-07-21 05:14:20,266 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460266"}]},"ts":"1689916460266"} 2023-07-21 05:14:20,266 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460265"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460265"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460265"}]},"ts":"1689916460265"} 2023-07-21 05:14:20,266 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460266"}]},"ts":"1689916460266"} 2023-07-21 05:14:20,266 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460266"}]},"ts":"1689916460266"} 2023-07-21 05:14:20,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=56, state=RUNNABLE; OpenRegionProcedure 
dade82b17d8e96876b676eb8915347f4, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=55, state=RUNNABLE; OpenRegionProcedure 01237b35b0dfaf916d9d99e78d7d98c8, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,275 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=57, state=RUNNABLE; OpenRegionProcedure 3a3e96c0d83cc4f475b4caf12263f289, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:20,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=53, state=RUNNABLE; OpenRegionProcedure 8661a56845e86934e0c495f3a00e284c, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,279 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=54, state=RUNNABLE; OpenRegionProcedure 26b88d4eba538c77983846412998755e, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:20,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01237b35b0dfaf916d9d99e78d7d98c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 05:14:20,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 
2023-07-21 05:14:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3a3e96c0d83cc4f475b4caf12263f289, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 05:14:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,432 INFO [StoreOpener-01237b35b0dfaf916d9d99e78d7d98c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,434 INFO [StoreOpener-3a3e96c0d83cc4f475b4caf12263f289-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,434 DEBUG [StoreOpener-01237b35b0dfaf916d9d99e78d7d98c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/f 2023-07-21 05:14:20,434 DEBUG [StoreOpener-01237b35b0dfaf916d9d99e78d7d98c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/f 2023-07-21 05:14:20,435 INFO [StoreOpener-01237b35b0dfaf916d9d99e78d7d98c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01237b35b0dfaf916d9d99e78d7d98c8 columnFamilyName f 2023-07-21 05:14:20,435 INFO [StoreOpener-01237b35b0dfaf916d9d99e78d7d98c8-1] regionserver.HStore(310): Store=01237b35b0dfaf916d9d99e78d7d98c8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:20,435 DEBUG [StoreOpener-3a3e96c0d83cc4f475b4caf12263f289-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/f 2023-07-21 05:14:20,435 DEBUG [StoreOpener-3a3e96c0d83cc4f475b4caf12263f289-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/f 2023-07-21 05:14:20,436 INFO [StoreOpener-3a3e96c0d83cc4f475b4caf12263f289-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3a3e96c0d83cc4f475b4caf12263f289 columnFamilyName f 2023-07-21 05:14:20,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,436 INFO [StoreOpener-3a3e96c0d83cc4f475b4caf12263f289-1] regionserver.HStore(310): Store=3a3e96c0d83cc4f475b4caf12263f289/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:20,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:20,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 01237b35b0dfaf916d9d99e78d7d98c8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12012580000, jitterRate=0.11875869333744049}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:20,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 01237b35b0dfaf916d9d99e78d7d98c8: 2023-07-21 05:14:20,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:20,446 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8., pid=59, masterSystemTime=1689916460425 2023-07-21 05:14:20,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3a3e96c0d83cc4f475b4caf12263f289; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11573458400, jitterRate=0.07786230742931366}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:20,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3a3e96c0d83cc4f475b4caf12263f289: 2023-07-21 05:14:20,448 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289., pid=60, masterSystemTime=1689916460428 2023-07-21 05:14:20,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 
2023-07-21 05:14:20,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8661a56845e86934e0c495f3a00e284c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 05:14:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,450 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=01237b35b0dfaf916d9d99e78d7d98c8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,450 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460450"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916460450"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916460450"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916460450"}]},"ts":"1689916460450"} 2023-07-21 05:14:20,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 
2023-07-21 05:14:20,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 26b88d4eba538c77983846412998755e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 05:14:20,451 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=3a3e96c0d83cc4f475b4caf12263f289, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,451 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460451"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916460451"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916460451"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916460451"}]},"ts":"1689916460451"} 2023-07-21 05:14:20,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,453 INFO [StoreOpener-8661a56845e86934e0c495f3a00e284c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,456 DEBUG [StoreOpener-8661a56845e86934e0c495f3a00e284c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/f 2023-07-21 05:14:20,456 DEBUG [StoreOpener-8661a56845e86934e0c495f3a00e284c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/f 2023-07-21 05:14:20,456 INFO [StoreOpener-8661a56845e86934e0c495f3a00e284c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8661a56845e86934e0c495f3a00e284c columnFamilyName f 2023-07-21 05:14:20,457 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=55 2023-07-21 05:14:20,457 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; OpenRegionProcedure 01237b35b0dfaf916d9d99e78d7d98c8, server=jenkins-hbase4.apache.org,40677,1689916451367 in 183 msec 2023-07-21 05:14:20,457 INFO [StoreOpener-8661a56845e86934e0c495f3a00e284c-1] regionserver.HStore(310): Store=8661a56845e86934e0c495f3a00e284c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:20,458 INFO [StoreOpener-26b88d4eba538c77983846412998755e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=57 2023-07-21 05:14:20,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=57, state=SUCCESS; OpenRegionProcedure 3a3e96c0d83cc4f475b4caf12263f289, server=jenkins-hbase4.apache.org,33541,1689916455330 in 179 msec 2023-07-21 05:14:20,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, ASSIGN in 349 msec 2023-07-21 05:14:20,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, ASSIGN in 351 msec 2023-07-21 05:14:20,463 DEBUG [StoreOpener-26b88d4eba538c77983846412998755e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/f 2023-07-21 05:14:20,463 DEBUG [StoreOpener-26b88d4eba538c77983846412998755e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/f 2023-07-21 05:14:20,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,464 INFO [StoreOpener-26b88d4eba538c77983846412998755e-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 26b88d4eba538c77983846412998755e columnFamilyName f 2023-07-21 05:14:20,464 INFO [StoreOpener-26b88d4eba538c77983846412998755e-1] regionserver.HStore(310): Store=26b88d4eba538c77983846412998755e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:20,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:20,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8661a56845e86934e0c495f3a00e284c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9410279040, jitterRate=-0.12359946966171265}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:20,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8661a56845e86934e0c495f3a00e284c: 2023-07-21 05:14:20,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c., pid=61, masterSystemTime=1689916460425 2023-07-21 05:14:20,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:20,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:20,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 
2023-07-21 05:14:20,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dade82b17d8e96876b676eb8915347f4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 05:14:20,471 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=8661a56845e86934e0c495f3a00e284c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,471 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460471"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916460471"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916460471"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916460471"}]},"ts":"1689916460471"} 2023-07-21 05:14:20,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:20,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,473 INFO [StoreOpener-dade82b17d8e96876b676eb8915347f4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,482 DEBUG [StoreOpener-dade82b17d8e96876b676eb8915347f4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/f 2023-07-21 05:14:20,483 DEBUG [StoreOpener-dade82b17d8e96876b676eb8915347f4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/f 2023-07-21 05:14:20,483 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=53 2023-07-21 05:14:20,483 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=53, state=SUCCESS; OpenRegionProcedure 8661a56845e86934e0c495f3a00e284c, server=jenkins-hbase4.apache.org,40677,1689916451367 in 196 msec 2023-07-21 05:14:20,483 INFO 
[StoreOpener-dade82b17d8e96876b676eb8915347f4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dade82b17d8e96876b676eb8915347f4 columnFamilyName f 2023-07-21 05:14:20,485 INFO [StoreOpener-dade82b17d8e96876b676eb8915347f4-1] regionserver.HStore(310): Store=dade82b17d8e96876b676eb8915347f4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:20,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:20,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,487 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 26b88d4eba538c77983846412998755e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11333315680, jitterRate=0.05549727380275726}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:20,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 26b88d4eba538c77983846412998755e: 2023-07-21 05:14:20,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, ASSIGN in 378 msec 2023-07-21 05:14:20,488 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e., pid=62, masterSystemTime=1689916460428 2023-07-21 05:14:20,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:20,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 
2023-07-21 05:14:20,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,492 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=26b88d4eba538c77983846412998755e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,492 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460492"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916460492"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916460492"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916460492"}]},"ts":"1689916460492"} 2023-07-21 05:14:20,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:20,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=54 2023-07-21 05:14:20,498 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=54, state=SUCCESS; OpenRegionProcedure 26b88d4eba538c77983846412998755e, server=jenkins-hbase4.apache.org,33541,1689916455330 in 216 msec 2023-07-21 05:14:20,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dade82b17d8e96876b676eb8915347f4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10242416320, jitterRate=-0.04610064625740051}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:20,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dade82b17d8e96876b676eb8915347f4: 2023-07-21 05:14:20,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4., pid=58, masterSystemTime=1689916460425 2023-07-21 05:14:20,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, ASSIGN in 390 msec 2023-07-21 05:14:20,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:20,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 
2023-07-21 05:14:20,502 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=dade82b17d8e96876b676eb8915347f4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,502 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460502"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916460502"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916460502"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916460502"}]},"ts":"1689916460502"} 2023-07-21 05:14:20,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=56 2023-07-21 05:14:20,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=56, state=SUCCESS; OpenRegionProcedure dade82b17d8e96876b676eb8915347f4, server=jenkins-hbase4.apache.org,40677,1689916451367 in 236 msec 2023-07-21 05:14:20,510 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-21 05:14:20,510 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, ASSIGN in 400 msec 2023-07-21 05:14:20,510 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916460510"}]},"ts":"1689916460510"} 2023-07-21 05:14:20,512 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 05:14:20,514 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 05:14:20,516 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.0910 sec 2023-07-21 05:14:20,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-21 05:14:20,544 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-21 05:14:20,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:20,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:20,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:20,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:20,548 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,557 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916460556"}]},"ts":"1689916460556"} 2023-07-21 05:14:20,559 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 05:14:20,560 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 05:14:20,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 05:14:20,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, UNASSIGN}] 2023-07-21 05:14:20,565 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, UNASSIGN 2023-07-21 05:14:20,565 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, UNASSIGN 2023-07-21 05:14:20,565 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, UNASSIGN 2023-07-21 05:14:20,566 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, UNASSIGN 
2023-07-21 05:14:20,566 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, UNASSIGN 2023-07-21 05:14:20,567 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=dade82b17d8e96876b676eb8915347f4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,567 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460567"}]},"ts":"1689916460567"} 2023-07-21 05:14:20,567 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=3a3e96c0d83cc4f475b4caf12263f289, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,568 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=01237b35b0dfaf916d9d99e78d7d98c8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,568 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=8661a56845e86934e0c495f3a00e284c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:20,568 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460568"}]},"ts":"1689916460568"} 2023-07-21 05:14:20,568 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460568"}]},"ts":"1689916460568"} 2023-07-21 05:14:20,568 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460567"}]},"ts":"1689916460567"} 2023-07-21 05:14:20,568 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=26b88d4eba538c77983846412998755e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:20,568 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916460568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916460568"}]},"ts":"1689916460568"} 2023-07-21 05:14:20,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=67, state=RUNNABLE; CloseRegionProcedure dade82b17d8e96876b676eb8915347f4, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=66, state=RUNNABLE; CloseRegionProcedure 01237b35b0dfaf916d9d99e78d7d98c8, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,573 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=64, state=RUNNABLE; CloseRegionProcedure 8661a56845e86934e0c495f3a00e284c, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:20,575 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure 3a3e96c0d83cc4f475b4caf12263f289, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:20,577 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=65, state=RUNNABLE; CloseRegionProcedure 26b88d4eba538c77983846412998755e, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:20,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 05:14:20,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dade82b17d8e96876b676eb8915347f4, disabling compactions & flushes 2023-07-21 05:14:20,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:20,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:20,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. after waiting 0 ms 2023-07-21 05:14:20,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 
2023-07-21 05:14:20,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 26b88d4eba538c77983846412998755e, disabling compactions & flushes 2023-07-21 05:14:20,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:20,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:20,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. after waiting 0 ms 2023-07-21 05:14:20,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:20,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:20,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:20,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e. 2023-07-21 05:14:20,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 26b88d4eba538c77983846412998755e: 2023-07-21 05:14:20,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3a3e96c0d83cc4f475b4caf12263f289, disabling compactions & flushes 2023-07-21 05:14:20,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 
after waiting 0 ms 2023-07-21 05:14:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4. 2023-07-21 05:14:20,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dade82b17d8e96876b676eb8915347f4: 2023-07-21 05:14:20,745 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=26b88d4eba538c77983846412998755e, regionState=CLOSED 2023-07-21 05:14:20,746 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460745"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460745"}]},"ts":"1689916460745"} 2023-07-21 05:14:20,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 01237b35b0dfaf916d9d99e78d7d98c8, disabling compactions & flushes 2023-07-21 05:14:20,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. after waiting 0 ms 2023-07-21 05:14:20,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 
2023-07-21 05:14:20,751 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=dade82b17d8e96876b676eb8915347f4, regionState=CLOSED 2023-07-21 05:14:20,751 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460751"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460751"}]},"ts":"1689916460751"} 2023-07-21 05:14:20,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:20,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289. 2023-07-21 05:14:20,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3a3e96c0d83cc4f475b4caf12263f289: 2023-07-21 05:14:20,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=65 2023-07-21 05:14:20,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=65, state=SUCCESS; CloseRegionProcedure 26b88d4eba538c77983846412998755e, server=jenkins-hbase4.apache.org,33541,1689916455330 in 173 msec 2023-07-21 05:14:20,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,766 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=67 2023-07-21 05:14:20,766 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=67, state=SUCCESS; CloseRegionProcedure dade82b17d8e96876b676eb8915347f4, server=jenkins-hbase4.apache.org,40677,1689916451367 in 191 msec 2023-07-21 05:14:20,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=26b88d4eba538c77983846412998755e, UNASSIGN in 200 msec 2023-07-21 05:14:20,767 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=3a3e96c0d83cc4f475b4caf12263f289, regionState=CLOSED 2023-07-21 05:14:20,767 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460767"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460767"}]},"ts":"1689916460767"} 2023-07-21 05:14:20,770 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=dade82b17d8e96876b676eb8915347f4, UNASSIGN in 203 msec 2023-07-21 05:14:20,773 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-21 05:14:20,773 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure 3a3e96c0d83cc4f475b4caf12263f289, 
server=jenkins-hbase4.apache.org,33541,1689916455330 in 194 msec 2023-07-21 05:14:20,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3a3e96c0d83cc4f475b4caf12263f289, UNASSIGN in 210 msec 2023-07-21 05:14:20,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:20,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8. 2023-07-21 05:14:20,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 01237b35b0dfaf916d9d99e78d7d98c8: 2023-07-21 05:14:20,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8661a56845e86934e0c495f3a00e284c, disabling compactions & flushes 2023-07-21 05:14:20,782 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:20,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:20,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. after waiting 0 ms 2023-07-21 05:14:20,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 2023-07-21 05:14:20,787 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=01237b35b0dfaf916d9d99e78d7d98c8, regionState=CLOSED 2023-07-21 05:14:20,787 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689916460787"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460787"}]},"ts":"1689916460787"} 2023-07-21 05:14:20,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:20,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c. 
2023-07-21 05:14:20,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=66 2023-07-21 05:14:20,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8661a56845e86934e0c495f3a00e284c: 2023-07-21 05:14:20,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; CloseRegionProcedure 01237b35b0dfaf916d9d99e78d7d98c8, server=jenkins-hbase4.apache.org,40677,1689916451367 in 219 msec 2023-07-21 05:14:20,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,803 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01237b35b0dfaf916d9d99e78d7d98c8, UNASSIGN in 230 msec 2023-07-21 05:14:20,803 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=8661a56845e86934e0c495f3a00e284c, regionState=CLOSED 2023-07-21 05:14:20,803 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689916460803"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916460803"}]},"ts":"1689916460803"} 2023-07-21 05:14:20,808 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=64 2023-07-21 05:14:20,808 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=64, state=SUCCESS; CloseRegionProcedure 8661a56845e86934e0c495f3a00e284c, server=jenkins-hbase4.apache.org,40677,1689916451367 in 232 msec 2023-07-21 05:14:20,812 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-21 05:14:20,812 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8661a56845e86934e0c495f3a00e284c, UNASSIGN in 245 msec 2023-07-21 05:14:20,813 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916460813"}]},"ts":"1689916460813"} 2023-07-21 05:14:20,815 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 05:14:20,817 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 05:14:20,819 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 268 msec 2023-07-21 05:14:20,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-21 05:14:20,867 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-21 05:14:20,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,879 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,881 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1156714162' 2023-07-21 05:14:20,882 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:20,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:20,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:20,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:20,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 05:14:20,896 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,896 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,896 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,896 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,899 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/recovered.edits] 2023-07-21 05:14:20,899 
DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/recovered.edits] 2023-07-21 05:14:20,899 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/recovered.edits] 2023-07-21 05:14:20,900 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/recovered.edits] 2023-07-21 05:14:20,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/recovered.edits] 2023-07-21 05:14:20,911 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e/recovered.edits/4.seqid 2023-07-21 05:14:20,911 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c/recovered.edits/4.seqid 2023-07-21 05:14:20,911 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289/recovered.edits/4.seqid 2023-07-21 05:14:20,911 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/26b88d4eba538c77983846412998755e 2023-07-21 05:14:20,912 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8661a56845e86934e0c495f3a00e284c 2023-07-21 05:14:20,915 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3a3e96c0d83cc4f475b4caf12263f289 2023-07-21 05:14:20,916 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8/recovered.edits/4.seqid 2023-07-21 05:14:20,916 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4/recovered.edits/4.seqid 2023-07-21 05:14:20,917 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01237b35b0dfaf916d9d99e78d7d98c8 2023-07-21 05:14:20,917 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/dade82b17d8e96876b676eb8915347f4 2023-07-21 05:14:20,917 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 05:14:20,920 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,927 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 05:14:20,929 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 05:14:20,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916460931"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916460931"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916460931"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916460931"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,931 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916460931"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,933 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 05:14:20,933 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8661a56845e86934e0c495f3a00e284c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689916459501.8661a56845e86934e0c495f3a00e284c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 26b88d4eba538c77983846412998755e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689916459501.26b88d4eba538c77983846412998755e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 01237b35b0dfaf916d9d99e78d7d98c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689916459501.01237b35b0dfaf916d9d99e78d7d98c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => dade82b17d8e96876b676eb8915347f4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689916459501.dade82b17d8e96876b676eb8915347f4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 3a3e96c0d83cc4f475b4caf12263f289, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689916459501.3a3e96c0d83cc4f475b4caf12263f289.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 05:14:20,933 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 05:14:20,934 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916460933"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:20,935 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 05:14:20,938 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 05:14:20,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 65 msec 2023-07-21 05:14:20,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-21 05:14:20,997 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-21 05:14:20,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:20,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup default 2023-07-21 05:14:21,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:21,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:21,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1156714162, current retry=0 2023-07-21 05:14:21,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:21,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1156714162 => default 2023-07-21 05:14:21,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1156714162 2023-07-21 05:14:21,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:21,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
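
[Editor's note] The MoveServers/MoveTables/RemoveRSGroup calls logged above are the test's teardown (TestRSGroupsBase.tearDownAfterMethod) restoring rsgroup state: the two region servers are moved back to the default group and the per-test group is dropped. A rough sketch of the same calls through org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient (the client the stack trace further below goes through) is given here; hostnames, ports and group name are taken from the log, while the constructor and method signatures are assumed from the 2.x hbase-rsgroup module and should be checked against the version in use.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Move the two region servers named in the log back to the default group...
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 40677),
              Address.fromParts("jenkins-hbase4.apache.org", 33541)));
          groups.moveServers(servers, "default");
          // ...then drop the now-empty per-test group.
          groups.removeRSGroup("Group_testTableMoveTruncateAndDrop_1156714162");
        }
      }
    }

Note that a later moveServers call in this teardown targets the master's own address (jenkins-hbase4.apache.org:42467), which is why the ConstraintException "Server ... is either offline or it does not exist" appears below; the test logs it as informational ("Got this on setup, FYI") rather than failing.
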
2023-07-21 05:14:21,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:21,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:21,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:21,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,047 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:21,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:21,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:21,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:21,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917661063, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:21,064 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:21,065 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:21,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,067 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:21,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:21,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,096 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=496 (was 426) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1611416701_17 at /127.0.0.1:43304 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55013@0x28e182c6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1611416701_17 at /127.0.0.1:32916 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1611416701_17 at /127.0.0.1:59098 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS:3;jenkins-hbase4:33541-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp187948242-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-54c35b24-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55013@0x28e182c6-SendThread(127.0.0.1:55013) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp187948242-641-acceptor-0@564ec7c4-ServerConnector@2373ab06{HTTP/1.1, (http/1.1)}{0.0.0.0:46017} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp187948242-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:38517 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp187948242-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:38517 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp187948242-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp187948242-640 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp187948242-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55013@0x28e182c6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf-prefix:jenkins-hbase4.apache.org,33541,1689916455330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33541 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1611416701_17 at /127.0.0.1:32810 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-145937958_17 at /127.0.0.1:59120 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:33541Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp187948242-647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=768 (was 679) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=501 (was 527), ProcessCount=178 (was 178), AvailableMemoryMB=4235 (was 4516) 2023-07-21 05:14:21,118 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=501, ProcessCount=178, AvailableMemoryMB=4234 2023-07-21 05:14:21,118 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 05:14:21,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. 
Ignoring. 2023-07-21 05:14:21,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:21,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:21,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:21,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,154 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:21,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:21,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:21,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:21,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917661167, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:21,168 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:21,169 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:21,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,171 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:21,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:21,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-21 05:14:21,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:40408 deadline: 1689917661173, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 05:14:21,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-21 05:14:21,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:40408 deadline: 1689917661175, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 05:14:21,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-21 05:14:21,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:40408 deadline: 1689917661177, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 05:14:21,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-21 05:14:21,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 05:14:21,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:21,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:21,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:21,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-21 05:14:21,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:21,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:21,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:21,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:21,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:21,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,222 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:21,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:21,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:21,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:21,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917661240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:21,240 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:21,242 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:21,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,243 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:21,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:21,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,262 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=499 (was 496) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=768 (was 768), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=501 (was 501), ProcessCount=178 (was 178), AvailableMemoryMB=4224 (was 4234) 2023-07-21 05:14:21,282 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=499, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=501, ProcessCount=178, AvailableMemoryMB=4224 2023-07-21 05:14:21,283 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 05:14:21,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:21,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:21,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:21,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:21,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:21,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:21,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:21,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:21,318 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:21,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:21,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:21,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:21,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:21,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917661355, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:21,357 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:21,359 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:21,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,362 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:21,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:21,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:21,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:21,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-21 05:14:21,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:21,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:21,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:21,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:21,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:21,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:42093] to rsgroup bar 2023-07-21 05:14:21,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:21,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:21,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:21,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:21,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(238): Moving server region ede3ac9f206f1997341b19733c39fd22, which do not belong to RSGroup bar 2023-07-21 05:14:21,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE 2023-07-21 05:14:21,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-21 05:14:21,404 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE 2023-07-21 05:14:21,405 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:21,405 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916461405"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916461405"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916461405"}]},"ts":"1689916461405"} 2023-07-21 05:14:21,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 05:14:21,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-21 05:14:21,407 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-21 05:14:21,408 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42093,1689916451283, state=CLOSING 2023-07-21 05:14:21,408 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:21,410 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:21,410 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:21,410 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:21,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:21,562 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-21 05:14:21,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ede3ac9f206f1997341b19733c39fd22, disabling compactions & flushes 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:21,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:21,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. after waiting 0 ms 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:21,564 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:21,565 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.95 KB heapSize=64.95 KB 2023-07-21 05:14:21,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ede3ac9f206f1997341b19733c39fd22 1/1 column families, dataSize=4.99 KB heapSize=8.40 KB 2023-07-21 05:14:21,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.99 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:21,637 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.88 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/info/17957644be3a420bb66cba34f3200437 2023-07-21 05:14:21,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:21,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/05ab717944c24ee68a945b2245eaf3bf as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:21,645 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 17957644be3a420bb66cba34f3200437 2023-07-21 05:14:21,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:21,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/05ab717944c24ee68a945b2245eaf3bf, entries=9, sequenceid=32, filesize=5.5 K 2023-07-21 05:14:21,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~4.99 KB/5109, heapSize ~8.38 KB/8584, currentSize=0 B/0 for ede3ac9f206f1997341b19733c39fd22 in 90ms, sequenceid=32, compaction requested=false 2023-07-21 05:14:21,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-21 05:14:21,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/rep_barrier/4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:21,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:21,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:21,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:21,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ede3ac9f206f1997341b19733c39fd22 move to jenkins-hbase4.apache.org,42315,1689916451166 record at close sequenceid=32 2023-07-21 05:14:21,670 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:21,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:21,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:21,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/table/2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:21,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:21,703 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/info/17957644be3a420bb66cba34f3200437 as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info/17957644be3a420bb66cba34f3200437 2023-07-21 05:14:21,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 17957644be3a420bb66cba34f3200437 2023-07-21 05:14:21,713 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info/17957644be3a420bb66cba34f3200437, entries=45, sequenceid=95, filesize=10.0 K 2023-07-21 05:14:21,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/rep_barrier/4522a590c8af44c5a5e7d06d6639367f as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier/4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:21,730 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:21,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier/4522a590c8af44c5a5e7d06d6639367f, entries=10, sequenceid=95, filesize=6.1 K 2023-07-21 05:14:21,732 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/table/2490511508c54014b6f71fdd409b9e01 as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table/2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:21,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:21,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table/2490511508c54014b6f71fdd409b9e01, entries=15, sequenceid=95, filesize=6.2 K 2023-07-21 05:14:21,744 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.95 KB/42953, heapSize ~64.90 KB/66456, currentSize=0 B/0 for 1588230740 in 178ms, sequenceid=95, compaction requested=false 2023-07-21 05:14:21,769 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-21 05:14:21,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:21,771 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:21,771 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:21,771 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,42315,1689916451166 record at close sequenceid=95 2023-07-21 05:14:21,773 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-21 05:14:21,774 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in 
hbase:meta; skipping -- ServerName required 2023-07-21 05:14:21,777 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-21 05:14:21,777 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42093,1689916451283 in 364 msec 2023-07-21 05:14:21,779 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:21,929 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42315,1689916451166, state=OPENING 2023-07-21 05:14:21,931 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:21,934 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:21,934 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:22,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 05:14:22,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:22,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42315%2C1689916451166.meta, suffix=.meta, logDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42315,1689916451166, archiveDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs, maxLogs=32 2023-07-21 05:14:22,117 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK] 2023-07-21 05:14:22,118 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK] 2023-07-21 05:14:22,119 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK] 2023-07-21 05:14:22,122 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/WALs/jenkins-hbase4.apache.org,42315,1689916451166/jenkins-hbase4.apache.org%2C42315%2C1689916451166.meta.1689916462096.meta 2023-07-21 05:14:22,122 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45983,DS-b78fc2a6-5cc1-456d-a1aa-9dc4e0ee367f,DISK], DatanodeInfoWithStorage[127.0.0.1:44623,DS-7c451091-9046-4b1d-8a3f-4d703150a8ab,DISK], DatanodeInfoWithStorage[127.0.0.1:38349,DS-bcb8a3a0-04d5-494e-8dff-602b9a3744dc,DISK]] 2023-07-21 05:14:22,122 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 05:14:22,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 05:14:22,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 05:14:22,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:22,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info 2023-07-21 05:14:22,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info 2023-07-21 05:14:22,127 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:22,138 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 17957644be3a420bb66cba34f3200437 2023-07-21 05:14:22,138 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info/17957644be3a420bb66cba34f3200437 2023-07-21 05:14:22,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:22,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:22,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:22,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:22,140 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:22,147 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:22,147 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier/4522a590c8af44c5a5e7d06d6639367f 2023-07-21 05:14:22,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:22,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:22,149 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table 2023-07-21 05:14:22,149 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table 2023-07-21 05:14:22,149 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:22,159 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:22,159 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table/2490511508c54014b6f71fdd409b9e01 2023-07-21 05:14:22,159 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:22,160 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:22,162 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740 2023-07-21 05:14:22,165 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 05:14:22,166 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:22,167 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11092753120, jitterRate=0.03309313952922821}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:22,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:22,169 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689916462087 2023-07-21 05:14:22,171 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 05:14:22,171 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 05:14:22,171 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42315,1689916451166, state=OPEN 2023-07-21 05:14:22,172 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:22,172 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:22,173 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=CLOSED 2023-07-21 05:14:22,173 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916462173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916462173"}]},"ts":"1689916462173"} 2023-07-21 05:14:22,174 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42093] ipc.CallRunner(144): callId: 185 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:50522 deadline: 1689916522174, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42315 startCode=1689916451166. As of locationSeqNum=95. 
2023-07-21 05:14:22,175 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-21 05:14:22,175 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42315,1689916451166 in 241 msec 2023-07-21 05:14:22,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 771 msec 2023-07-21 05:14:22,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-21 05:14:22,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42093,1689916451283 in 869 msec 2023-07-21 05:14:22,279 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:22,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-21 05:14:22,430 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:22,430 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916462430"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916462430"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916462430"}]},"ts":"1689916462430"} 2023-07-21 05:14:22,432 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:22,588 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:22,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ede3ac9f206f1997341b19733c39fd22, NAME => 'hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:22,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:22,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. service=MultiRowMutationService 2023-07-21 05:14:22,589 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 05:14:22,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:22,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,591 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,592 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:22,592 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m 2023-07-21 05:14:22,592 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ede3ac9f206f1997341b19733c39fd22 columnFamilyName m 2023-07-21 05:14:22,603 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:22,603 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/05ab717944c24ee68a945b2245eaf3bf 2023-07-21 05:14:22,610 DEBUG [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(539): loaded hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/9b73a5e2511842c197f2fb115fc1d18f 2023-07-21 05:14:22,610 INFO [StoreOpener-ede3ac9f206f1997341b19733c39fd22-1] regionserver.HStore(310): Store=ede3ac9f206f1997341b19733c39fd22/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:22,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,613 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:22,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ede3ac9f206f1997341b19733c39fd22; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@433c0d06, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:22,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:22,618 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22., pid=80, masterSystemTime=1689916462584 2023-07-21 05:14:22,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:22,620 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:22,620 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=ede3ac9f206f1997341b19733c39fd22, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:22,620 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916462620"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916462620"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916462620"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916462620"}]},"ts":"1689916462620"} 2023-07-21 05:14:22,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-21 05:14:22,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure ede3ac9f206f1997341b19733c39fd22, server=jenkins-hbase4.apache.org,42315,1689916451166 in 191 msec 2023-07-21 05:14:22,626 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ede3ac9f206f1997341b19733c39fd22, REOPEN/MOVE in 1.2240 sec 2023-07-21 05:14:23,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367, jenkins-hbase4.apache.org,42093,1689916451283] are moved back to default 2023-07-21 05:14:23,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 05:14:23,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:23,408 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42093] ipc.CallRunner(144): callId: 13 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:50570 deadline: 1689916523407, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42315 startCode=1689916451166. As of locationSeqNum=32. 2023-07-21 05:14:23,509 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42093] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50570 deadline: 1689916523509, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42315 startCode=1689916451166. As of locationSeqNum=95. 
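The entries above record the master finishing an RSGroupAdminService.MoveServers request ("Move servers done: default => bar") and, as part of it, reopening the hbase:rsgroup and hbase:meta regions on a server that stays in the default group. For context only, a minimal client-side sketch of the same request against the hbase-rsgroup client API (RSGroupAdminClient, Address); the class is assumed from that module and the server address below is a placeholder, not one of the hosts in this log:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");  // create the target rsgroup first
          Set<Address> servers = new HashSet<>();
          // Placeholder address; the log above moves jenkins-hbase4.apache.org:33541/40677/42093.
          servers.add(Address.fromParts("regionserver-1.example.org", 16020));
          rsGroupAdmin.moveServers(servers, "bar");  // the RSGroupAdminService.MoveServers call
        }
      }
    }

The call returns only after the master has moved the affected regions off the departing servers, which is why the procedure log above shows the REOPEN/MOVE transitions completing before "Move servers done".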
2023-07-21 05:14:23,611 DEBUG [hconnection-0x78ef668c-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:23,619 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50674, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:23,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:23,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:23,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 05:14:23,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:23,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:23,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:23,642 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:23,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-21 05:14:23,643 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42093] ipc.CallRunner(144): callId: 190 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:50522 deadline: 1689916523643, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42315 startCode=1689916451166. As of locationSeqNum=32. 
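The create request logged above ('Group_testFailRemoveGroup', REGION_REPLICATION => '1', a single column family 'f' with default attributes) maps onto a standard Admin call. A minimal sketch, assuming an already-configured client connection:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
                  .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))   // family 'f', default attributes
                  .build();
          admin.createTable(desc);  // returns once the CreateTableProcedure (pid=81 above) is done
        }
      }
    }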
2023-07-21 05:14:23,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 05:14:23,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 05:14:23,748 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:23,748 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:23,749 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:23,749 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:23,751 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:23,753 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:23,754 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 empty. 2023-07-21 05:14:23,754 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:23,754 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 05:14:23,778 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:23,780 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a45f5fe3f5080fcdc3fc607c1e03c551, NAME => 'Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:23,800 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:23,800 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing a45f5fe3f5080fcdc3fc607c1e03c551, disabling compactions & flushes 2023-07-21 05:14:23,800 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:23,800 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:23,800 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. after waiting 0 ms 2023-07-21 05:14:23,800 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:23,801 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:23,801 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:23,804 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:23,805 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916463805"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916463805"}]},"ts":"1689916463805"} 2023-07-21 05:14:23,807 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
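The Put entries above are the master recording region and table state for the new table in hbase:meta (info:regioninfo, info:state, and table:state columns). As an illustrative way to read those rows back from a client, a prefix scan over hbase:meta works; treating "<table>," as the row-key prefix is an assumption about the meta layout made for this sketch:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMetaForTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Scan only the rows the master wrote for Group_testFailRemoveGroup.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testFailRemoveGroup,"))
              .addFamily(Bytes.toBytes("info"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
              System.out.println(Bytes.toString(r.getRow()) + " state=" + Bytes.toString(state));
            }
          }
        }
      }
    }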
2023-07-21 05:14:23,808 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:23,808 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916463808"}]},"ts":"1689916463808"} 2023-07-21 05:14:23,810 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 05:14:23,818 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, ASSIGN}] 2023-07-21 05:14:23,822 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, ASSIGN 2023-07-21 05:14:23,824 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:23,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 05:14:23,976 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:23,976 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916463976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916463976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916463976"}]},"ts":"1689916463976"} 2023-07-21 05:14:23,978 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:24,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:24,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a45f5fe3f5080fcdc3fc607c1e03c551, NAME => 'Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:24,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:24,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,138 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 05:14:24,139 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,146 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:24,147 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:24,147 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a45f5fe3f5080fcdc3fc607c1e03c551 columnFamilyName f 2023-07-21 05:14:24,148 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(310): Store=a45f5fe3f5080fcdc3fc607c1e03c551/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:24,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:24,172 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a45f5fe3f5080fcdc3fc607c1e03c551; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11109520320, jitterRate=0.034654706716537476}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:24,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:24,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551., pid=83, masterSystemTime=1689916464129 2023-07-21 05:14:24,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:24,179 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:24,179 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916464179"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916464179"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916464179"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916464179"}]},"ts":"1689916464179"} 2023-07-21 05:14:24,185 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-21 05:14:24,185 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166 in 203 msec 2023-07-21 05:14:24,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 05:14:24,189 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, ASSIGN in 368 msec 2023-07-21 05:14:24,190 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:24,191 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916464190"}]},"ts":"1689916464190"} 2023-07-21 05:14:24,194 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 05:14:24,197 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:24,200 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 558 msec 2023-07-21 05:14:24,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 05:14:24,248 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-21 05:14:24,248 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-21 05:14:24,248 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:24,249 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42093] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:50554 deadline: 1689916524249, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42315 startCode=1689916451166. As of locationSeqNum=95. 2023-07-21 05:14:24,352 DEBUG [hconnection-0xfdeaa0f-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:24,354 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50690, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:24,360 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-21 05:14:24,361 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:24,361 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 05:14:24,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 05:14:24,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:24,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:24,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:24,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:24,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 05:14:24,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region a45f5fe3f5080fcdc3fc607c1e03c551 to RSGroup bar 2023-07-21 05:14:24,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:24,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:24,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:24,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:24,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 05:14:24,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:24,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE 2023-07-21 05:14:24,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 05:14:24,371 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE 2023-07-21 05:14:24,371 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:24,371 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916464371"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916464371"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916464371"}]},"ts":"1689916464371"} 2023-07-21 05:14:24,373 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:24,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a45f5fe3f5080fcdc3fc607c1e03c551, disabling compactions & flushes 2023-07-21 05:14:24,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. after waiting 0 ms 2023-07-21 05:14:24,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:24,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:24,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:24,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a45f5fe3f5080fcdc3fc607c1e03c551 move to jenkins-hbase4.apache.org,40677,1689916451367 record at close sequenceid=2 2023-07-21 05:14:24,535 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,535 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSED 2023-07-21 05:14:24,536 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916464535"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916464535"}]},"ts":"1689916464535"} 2023-07-21 05:14:24,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-21 05:14:24,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166 in 165 msec 2023-07-21 05:14:24,541 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40677,1689916451367; forceNewPlan=false, retain=false 2023-07-21 05:14:24,691 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 05:14:24,692 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:24,692 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916464692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916464692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916464692"}]},"ts":"1689916464692"} 2023-07-21 05:14:24,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:24,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:24,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a45f5fe3f5080fcdc3fc607c1e03c551, NAME => 'Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:24,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:24,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,852 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,853 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:24,853 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:24,854 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a45f5fe3f5080fcdc3fc607c1e03c551 columnFamilyName f 2023-07-21 05:14:24,854 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(310): Store=a45f5fe3f5080fcdc3fc607c1e03c551/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:24,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,856 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:24,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a45f5fe3f5080fcdc3fc607c1e03c551; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10212660960, jitterRate=-0.048871830105781555}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:24,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:24,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551., pid=86, masterSystemTime=1689916464846 2023-07-21 05:14:24,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:24,862 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:24,862 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916464862"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916464862"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916464862"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916464862"}]},"ts":"1689916464862"} 2023-07-21 05:14:24,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-21 05:14:24,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,40677,1689916451367 in 170 msec 2023-07-21 05:14:24,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE in 496 msec 2023-07-21 05:14:25,303 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-21 05:14:25,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-21 05:14:25,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
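The MoveTables request above relocated every region of Group_testFailRemoveGroup onto servers of group bar, then (after the second MoveTables) back to default, each time via a CloseRegionProcedure/OpenRegionProcedure pair for region a45f5fe3f5080fcdc3fc607c1e03c551. A sketch of the equivalent client call, again assuming the hbase-rsgroup client API:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<TableName> tables = new HashSet<>();
          tables.add(TableName.valueOf("Group_testFailRemoveGroup"));
          // The master closes and reopens every region of the table on servers of the
          // target group (the pid=84..86 procedures above) before this call returns.
          rsGroupAdmin.moveTables(tables, "bar");
          // Confirm the mapping: the group's table set should now contain the table.
          System.out.println(rsGroupAdmin.getRSGroupInfo("bar").getTables());
        }
      }
    }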
2023-07-21 05:14:25,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:25,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:25,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:25,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 05:14:25,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:25,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 05:14:25,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:25,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:40408 deadline: 1689917665380, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 05:14:25,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:42093] to rsgroup default 2023-07-21 05:14:25,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:25,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:40408 deadline: 1689917665382, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 05:14:25,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 05:14:25,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:25,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:25,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:25,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:25,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 05:14:25,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region a45f5fe3f5080fcdc3fc607c1e03c551 to RSGroup default 2023-07-21 05:14:25,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE 2023-07-21 05:14:25,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 05:14:25,394 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE 2023-07-21 05:14:25,395 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:25,395 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916465395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916465395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916465395"}]},"ts":"1689916465395"} 2023-07-21 05:14:25,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:25,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a45f5fe3f5080fcdc3fc607c1e03c551, disabling compactions & flushes 2023-07-21 05:14:25,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:25,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:25,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. after waiting 0 ms 2023-07-21 05:14:25,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:25,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:25,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:25,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:25,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a45f5fe3f5080fcdc3fc607c1e03c551 move to jenkins-hbase4.apache.org,42315,1689916451166 record at close sequenceid=5 2023-07-21 05:14:25,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,567 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSED 2023-07-21 05:14:25,567 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916465567"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916465567"}]},"ts":"1689916465567"} 2023-07-21 05:14:25,574 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-21 05:14:25,575 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,40677,1689916451367 in 172 msec 2023-07-21 05:14:25,575 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:25,726 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:25,726 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916465726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916465726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916465726"}]},"ts":"1689916465726"} 2023-07-21 05:14:25,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:25,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 
2023-07-21 05:14:25,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a45f5fe3f5080fcdc3fc607c1e03c551, NAME => 'Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:25,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:25,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,888 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,889 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:25,889 DEBUG [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f 2023-07-21 05:14:25,889 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a45f5fe3f5080fcdc3fc607c1e03c551 columnFamilyName f 2023-07-21 05:14:25,890 INFO [StoreOpener-a45f5fe3f5080fcdc3fc607c1e03c551-1] regionserver.HStore(310): Store=a45f5fe3f5080fcdc3fc607c1e03c551/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:25,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,892 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:25,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a45f5fe3f5080fcdc3fc607c1e03c551; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10472807360, jitterRate=-0.024643808603286743}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:25,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:25,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551., pid=89, masterSystemTime=1689916465882 2023-07-21 05:14:25,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:25,899 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:25,899 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:25,899 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916465899"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916465899"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916465899"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916465899"}]},"ts":"1689916465899"} 2023-07-21 05:14:25,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-21 05:14:25,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166 in 173 msec 2023-07-21 05:14:25,903 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, REOPEN/MOVE in 511 msec 2023-07-21 05:14:26,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-21 05:14:26,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
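The REOPEN/MOVE procedure above (pid=87) completes the table move that the RSGroupAdminService.MoveTables request drives. For orientation, a minimal client-side sketch of issuing the same request is shown below; it assumes the RSGroupAdminClient class named in the stack traces later in this log, plus a placeholder connection setup, and is not the test's own code.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToDefaultGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Mirrors the RSGroupAdminService.MoveTables request seen in the log:
      // move the test table back to the 'default' RSGroup.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
          "default");
    }
  }
}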
2023-07-21 05:14:26,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:26,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 05:14:26,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:26,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:40408 deadline: 1689917666402, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
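The ConstraintException above is the expected failure this test exercises: a group that still holds servers cannot be removed. The log lines that follow perform the required sequence (drain the group's servers back to 'default', then remove the group). A hedged sketch of that sequence is given below, using the RSGroupAdminClient methods referenced in the stack trace; the hostname and port are placeholders, not values from this run.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupAfterDrainingServers {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<Address> servers = new HashSet<>();
      // Placeholder server address; the test uses the minicluster's region servers.
      servers.add(Address.fromParts("regionserver-1.example.org", 16020));
      // First drain the group's servers back into 'default'...
      rsGroupAdmin.moveServers(servers, "default");
      // ...then the group can be removed without hitting the ConstraintException.
      rsGroupAdmin.removeRSGroup("bar");
    }
  }
}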
2023-07-21 05:14:26,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:42093] to rsgroup default 2023-07-21 05:14:26,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 05:14:26,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:26,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 05:14:26,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367, jenkins-hbase4.apache.org,42093,1689916451283] are moved back to bar 2023-07-21 05:14:26,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 05:14:26,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:26,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 05:14:26,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:26,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:26,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,432 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 05:14:26,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-21 05:14:26,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 05:14:26,436 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916466436"}]},"ts":"1689916466436"} 2023-07-21 05:14:26,438 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 05:14:26,439 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 05:14:26,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, UNASSIGN}] 2023-07-21 05:14:26,441 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, UNASSIGN 2023-07-21 05:14:26,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:26,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916466442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916466442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916466442"}]},"ts":"1689916466442"} 2023-07-21 05:14:26,443 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:26,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 05:14:26,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:26,597 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a45f5fe3f5080fcdc3fc607c1e03c551, disabling compactions & flushes 2023-07-21 05:14:26,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. after waiting 0 ms 2023-07-21 05:14:26,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:26,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 05:14:26,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551. 2023-07-21 05:14:26,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a45f5fe3f5080fcdc3fc607c1e03c551: 2023-07-21 05:14:26,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:26,611 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=a45f5fe3f5080fcdc3fc607c1e03c551, regionState=CLOSED 2023-07-21 05:14:26,611 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689916466610"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916466610"}]},"ts":"1689916466610"} 2023-07-21 05:14:26,615 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-21 05:14:26,616 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure a45f5fe3f5080fcdc3fc607c1e03c551, server=jenkins-hbase4.apache.org,42315,1689916451166 in 170 msec 2023-07-21 05:14:26,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-21 05:14:26,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a45f5fe3f5080fcdc3fc607c1e03c551, UNASSIGN in 176 msec 2023-07-21 05:14:26,621 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916466621"}]},"ts":"1689916466621"} 2023-07-21 05:14:26,622 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 05:14:26,624 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 05:14:26,626 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 192 msec 2023-07-21 05:14:26,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-21 05:14:26,739 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-21 05:14:26,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-21 05:14:26,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,744 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 05:14:26,745 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:26,752 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:26,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 05:14:26,757 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits] 2023-07-21 05:14:26,767 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/10.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551/recovered.edits/10.seqid 2023-07-21 05:14:26,769 DEBUG [HFileArchiver-4] 
backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testFailRemoveGroup/a45f5fe3f5080fcdc3fc607c1e03c551 2023-07-21 05:14:26,769 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 05:14:26,773 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,777 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 05:14:26,780 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 05:14:26,782 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,782 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-21 05:14:26,782 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916466782"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:26,788 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 05:14:26,788 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a45f5fe3f5080fcdc3fc607c1e03c551, NAME => 'Group_testFailRemoveGroup,,1689916463639.a45f5fe3f5080fcdc3fc607c1e03c551.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 05:14:26,789 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-21 05:14:26,789 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916466789"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:26,792 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 05:14:26,796 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 05:14:26,807 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 55 msec 2023-07-21 05:14:26,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-21 05:14:26,857 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-21 05:14:26,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:26,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
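The teardown recorded in pids 90 and 93 (DisableTableProcedure followed by DeleteTableProcedure) corresponds to an ordinary disable-then-delete from the client. A minimal sketch, assuming the standard HBase Admin API and the test table name taken from this log:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTestTable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Mirrors the DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93)
      // the client drives in the log above.
      if (admin.tableExists(table)) {
        admin.disableTable(table);
        admin.deleteTable(table);
      }
    }
  }
}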
2023-07-21 05:14:26,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:26,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:26,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:26,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:26,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:26,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:26,880 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:26,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:26,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:26,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:26,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:26,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:26,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917666894, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:26,895 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:26,897 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:26,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,898 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:26,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:26,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:26,918 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=515 (was 499) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_-284496683_17 at /127.0.0.1:57670 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1412302068_17 at /127.0.0.1:36240 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-145937958_17 at /127.0.0.1:43436 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf-prefix:jenkins-hbase4.apache.org,42315,1689916451166.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-284496683_17 at /127.0.0.1:36236 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-284496683_17 at /127.0.0.1:43414 [Receiving block BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-491990667-172.31.14.131-1689916444933:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xfdeaa0f-shared-pool-2 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-284496683_17 at /127.0.0.1:57680 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=790 (was 768) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 501), ProcessCount=174 (was 178), AvailableMemoryMB=3912 (was 4224) 2023-07-21 05:14:26,919 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:26,940 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=515, OpenFileDescriptor=790, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=174, AvailableMemoryMB=3910 2023-07-21 05:14:26,940 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:26,940 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 05:14:26,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:26,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:26,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:26,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:26,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:26,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:26,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:26,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:26,964 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:26,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:26,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,969 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:26,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:26,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:26,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:26,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917666980, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:26,981 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:26,987 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:26,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:26,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:26,989 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:26,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:26,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:26,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:26,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:26,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_635189350 2023-07-21 05:14:26,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:26,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:26,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:26,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:27,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:27,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:27,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:27,015 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33541] to rsgroup Group_testMultiTableMove_635189350 2023-07-21 05:14:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:27,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:27,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:27,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330] are moved back to default 2023-07-21 05:14:27,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_635189350 2023-07-21 05:14:27,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:27,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:27,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:27,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_635189350 2023-07-21 05:14:27,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:27,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:27,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:27,034 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:27,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-21 05:14:27,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 05:14:27,037 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:27,037 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:27,038 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:27,038 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:27,044 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:27,046 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,047 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e empty. 2023-07-21 05:14:27,048 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,048 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 05:14:27,083 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:27,097 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2f8e94a4b2fb120cc3f31e47523a8e9e, NAME => 'GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:27,122 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:27,123 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
2f8e94a4b2fb120cc3f31e47523a8e9e, disabling compactions & flushes 2023-07-21 05:14:27,123 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,123 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,123 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. after waiting 0 ms 2023-07-21 05:14:27,123 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,123 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,123 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 2f8e94a4b2fb120cc3f31e47523a8e9e: 2023-07-21 05:14:27,126 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:27,127 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916467127"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916467127"}]},"ts":"1689916467127"} 2023-07-21 05:14:27,129 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 05:14:27,130 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:27,130 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916467130"}]},"ts":"1689916467130"} 2023-07-21 05:14:27,131 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 05:14:27,135 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:27,135 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:27,135 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:27,135 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:27,135 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:27,135 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, ASSIGN}] 2023-07-21 05:14:27,137 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, ASSIGN 2023-07-21 05:14:27,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 05:14:27,138 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:27,288 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:27,290 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:27,290 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916467290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916467290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916467290"}]},"ts":"1689916467290"} 2023-07-21 05:14:27,292 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:27,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 05:14:27,449 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f8e94a4b2fb120cc3f31e47523a8e9e, NAME => 'GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,452 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,454 DEBUG [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/f 2023-07-21 05:14:27,454 DEBUG [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/f 2023-07-21 05:14:27,454 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f8e94a4b2fb120cc3f31e47523a8e9e columnFamilyName f 2023-07-21 05:14:27,457 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] regionserver.HStore(310): Store=2f8e94a4b2fb120cc3f31e47523a8e9e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:27,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:27,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 05:14:27,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:27,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2f8e94a4b2fb120cc3f31e47523a8e9e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10203076160, jitterRate=-0.049764484167099}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:27,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2f8e94a4b2fb120cc3f31e47523a8e9e: 2023-07-21 05:14:27,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e., pid=96, masterSystemTime=1689916467444 2023-07-21 05:14:27,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:27,666 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 
2023-07-21 05:14:27,668 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:27,668 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916467668"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916467668"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916467668"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916467668"}]},"ts":"1689916467668"} 2023-07-21 05:14:27,673 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-21 05:14:27,673 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,42315,1689916451166 in 378 msec 2023-07-21 05:14:27,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-21 05:14:27,676 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, ASSIGN in 538 msec 2023-07-21 05:14:27,677 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:27,677 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916467677"}]},"ts":"1689916467677"} 2023-07-21 05:14:27,679 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 05:14:27,682 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:27,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 652 msec 2023-07-21 05:14:28,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-21 05:14:28,163 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-21 05:14:28,163 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-21 05:14:28,163 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:28,167 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-21 05:14:28,167 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:28,167 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 05:14:28,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:28,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:28,173 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:28,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-21 05:14:28,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 05:14:28,179 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:28,180 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:28,180 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:28,181 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:28,184 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:28,186 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,187 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 empty. 
2023-07-21 05:14:28,187 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,187 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 05:14:28,213 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:28,215 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 179f7f61354c015d36d8b2a10c856f86, NAME => 'GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:28,227 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:28,228 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 179f7f61354c015d36d8b2a10c856f86, disabling compactions & flushes 2023-07-21 05:14:28,228 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,228 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,228 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. after waiting 0 ms 2023-07-21 05:14:28,228 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,228 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:28,228 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 179f7f61354c015d36d8b2a10c856f86: 2023-07-21 05:14:28,230 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:28,232 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916468231"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916468231"}]},"ts":"1689916468231"} 2023-07-21 05:14:28,235 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:28,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:28,237 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916468237"}]},"ts":"1689916468237"} 2023-07-21 05:14:28,238 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 05:14:28,247 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:28,247 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:28,247 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:28,247 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:28,247 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:28,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, ASSIGN}] 2023-07-21 05:14:28,249 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, ASSIGN 2023-07-21 05:14:28,250 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:28,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 05:14:28,401 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:28,402 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:28,402 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916468402"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916468402"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916468402"}]},"ts":"1689916468402"} 2023-07-21 05:14:28,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:28,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 05:14:28,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 179f7f61354c015d36d8b2a10c856f86, NAME => 'GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:28,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:28,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,562 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,564 DEBUG [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/f 2023-07-21 05:14:28,564 DEBUG [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/f 2023-07-21 05:14:28,565 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 179f7f61354c015d36d8b2a10c856f86 columnFamilyName f 2023-07-21 05:14:28,565 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] regionserver.HStore(310): Store=179f7f61354c015d36d8b2a10c856f86/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:28,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,570 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:28,573 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 179f7f61354c015d36d8b2a10c856f86; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9585100480, jitterRate=-0.1073179543018341}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:28,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 179f7f61354c015d36d8b2a10c856f86: 2023-07-21 05:14:28,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86., pid=99, masterSystemTime=1689916468556 2023-07-21 05:14:28,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:28,576 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:28,576 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916468576"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916468576"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916468576"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916468576"}]},"ts":"1689916468576"} 2023-07-21 05:14:28,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 05:14:28,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,42093,1689916451283 in 174 msec 2023-07-21 05:14:28,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 05:14:28,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, ASSIGN in 332 msec 2023-07-21 05:14:28,582 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:28,582 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916468582"}]},"ts":"1689916468582"} 2023-07-21 05:14:28,583 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 05:14:28,585 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:28,587 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 415 msec 2023-07-21 05:14:28,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 05:14:28,778 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-21 05:14:28,778 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-21 05:14:28,778 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:28,785 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
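
The "Waiting until all regions ... get assigned" messages around this point come from the test utility's assignment wait. A minimal sketch of that call, assuming a running HBaseTestingUtility instance named TEST_UTIL as in these tests, is:

    // Illustrative: block up to 60s until every region of the table is assigned,
    // which produces the Waiter(180) / HBaseTestingUtility(3430) lines in this log.
    TEST_UTIL.waitUntilAllRegionsAssigned(
        TableName.valueOf("GrouptestMultiTableMoveB"), 60000);
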
2023-07-21 05:14:28,785 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:28,785 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 05:14:28,786 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:28,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 05:14:28,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:28,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 05:14:28,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:28,803 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_635189350 2023-07-21 05:14:28,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_635189350 2023-07-21 05:14:28,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:28,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:28,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:28,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:28,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_635189350 2023-07-21 05:14:28,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 179f7f61354c015d36d8b2a10c856f86 to RSGroup Group_testMultiTableMove_635189350 2023-07-21 05:14:28,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, REOPEN/MOVE 2023-07-21 05:14:28,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_635189350 2023-07-21 05:14:28,825 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 2f8e94a4b2fb120cc3f31e47523a8e9e to RSGroup Group_testMultiTableMove_635189350 2023-07-21 05:14:28,827 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, REOPEN/MOVE 2023-07-21 05:14:28,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, REOPEN/MOVE 2023-07-21 05:14:28,829 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:28,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_635189350, current retry=0 2023-07-21 05:14:28,830 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916468829"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916468829"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916468829"}]},"ts":"1689916468829"} 2023-07-21 05:14:28,831 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, REOPEN/MOVE 2023-07-21 05:14:28,832 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:28,832 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916468832"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916468832"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916468832"}]},"ts":"1689916468832"} 2023-07-21 05:14:28,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:28,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:28,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:28,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 179f7f61354c015d36d8b2a10c856f86, disabling compactions & flushes 2023-07-21 05:14:28,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. after waiting 0 ms 2023-07-21 05:14:28,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:28,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:28,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2f8e94a4b2fb120cc3f31e47523a8e9e, disabling compactions & flushes 2023-07-21 05:14:28,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:28,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:28,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. after waiting 0 ms 2023-07-21 05:14:28,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:28,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:28,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:28,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2f8e94a4b2fb120cc3f31e47523a8e9e: 2023-07-21 05:14:28,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2f8e94a4b2fb120cc3f31e47523a8e9e move to jenkins-hbase4.apache.org,33541,1689916455330 record at close sequenceid=2 2023-07-21 05:14:29,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:29,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:29,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 179f7f61354c015d36d8b2a10c856f86: 2023-07-21 05:14:29,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 179f7f61354c015d36d8b2a10c856f86 move to jenkins-hbase4.apache.org,33541,1689916455330 record at close sequenceid=2 2023-07-21 05:14:29,006 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=CLOSED 2023-07-21 05:14:29,006 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916469006"}]},"ts":"1689916469006"} 2023-07-21 05:14:29,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,009 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=CLOSED 2023-07-21 05:14:29,009 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469009"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916469009"}]},"ts":"1689916469009"} 2023-07-21 05:14:29,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-21 05:14:29,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-21 05:14:29,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,42093,1689916451283 in 179 msec 2023-07-21 05:14:29,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,42315,1689916451166 in 178 msec 2023-07-21 05:14:29,019 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:29,019 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33541,1689916455330; forceNewPlan=false, retain=false 2023-07-21 05:14:29,169 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:29,169 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:29,170 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916469169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916469169"}]},"ts":"1689916469169"} 2023-07-21 05:14:29,170 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916469169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916469169"}]},"ts":"1689916469169"} 2023-07-21 05:14:29,172 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:29,173 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:29,330 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:29,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f8e94a4b2fb120cc3f31e47523a8e9e, NAME => 'GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:29,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:29,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,332 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,333 DEBUG [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/f 2023-07-21 05:14:29,333 DEBUG [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/f 2023-07-21 05:14:29,333 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f8e94a4b2fb120cc3f31e47523a8e9e columnFamilyName f 2023-07-21 05:14:29,334 INFO [StoreOpener-2f8e94a4b2fb120cc3f31e47523a8e9e-1] regionserver.HStore(310): Store=2f8e94a4b2fb120cc3f31e47523a8e9e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:29,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:29,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2f8e94a4b2fb120cc3f31e47523a8e9e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10384008000, jitterRate=-0.03291389346122742}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:29,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2f8e94a4b2fb120cc3f31e47523a8e9e: 2023-07-21 05:14:29,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e., pid=105, masterSystemTime=1689916469325 2023-07-21 05:14:29,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:29,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:29,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:29,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 179f7f61354c015d36d8b2a10c856f86, NAME => 'GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:29,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:29,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469341"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916469341"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916469341"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916469341"}]},"ts":"1689916469341"} 2023-07-21 05:14:29,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:29,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,343 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,344 DEBUG [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/f 2023-07-21 05:14:29,345 DEBUG [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/f 2023-07-21 05:14:29,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-21 05:14:29,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,33541,1689916455330 in 171 msec 2023-07-21 05:14:29,345 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 179f7f61354c015d36d8b2a10c856f86 columnFamilyName f 2023-07-21 05:14:29,346 INFO [StoreOpener-179f7f61354c015d36d8b2a10c856f86-1] regionserver.HStore(310): Store=179f7f61354c015d36d8b2a10c856f86/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:29,346 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, REOPEN/MOVE in 517 msec 2023-07-21 05:14:29,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:29,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 179f7f61354c015d36d8b2a10c856f86; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11057435200, jitterRate=0.02980390191078186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:29,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 179f7f61354c015d36d8b2a10c856f86: 2023-07-21 05:14:29,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86., pid=104, masterSystemTime=1689916469325 2023-07-21 05:14:29,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:29,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:29,354 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:29,354 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469354"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916469354"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916469354"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916469354"}]},"ts":"1689916469354"} 2023-07-21 05:14:29,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-21 05:14:29,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,33541,1689916455330 in 184 msec 2023-07-21 05:14:29,358 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, REOPEN/MOVE in 544 msec 2023-07-21 05:14:29,637 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 05:14:29,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-21 05:14:29,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_635189350. 
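
The MoveTables request that produced the REOPEN/MOVE procedures above (pids 100 through 105) corresponds, on the client side, to an RSGroupAdmin call along the following lines. Group and table names are taken from the log; the RSGroupAdminClient wiring is an assumption about how a caller would set it up, not the test's literal code.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Illustrative: move both tables into the target group in one call; the master
    // then closes and reopens each region on a server of that group, as logged above.
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn); // conn: an open Connection (assumed)
    Set<TableName> tables = new HashSet<>(Arrays.asList(
        TableName.valueOf("GrouptestMultiTableMoveA"),
        TableName.valueOf("GrouptestMultiTableMoveB")));
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_635189350");
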
2023-07-21 05:14:29,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:29,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:29,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:29,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 05:14:29,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:29,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 05:14:29,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:29,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:29,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:29,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_635189350 2023-07-21 05:14:29,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:29,854 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 05:14:29,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-21 05:14:29,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:29,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 05:14:29,859 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916469858"}]},"ts":"1689916469858"} 
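
The GetRSGroupInfoOfTable / GetRSGroupInfo requests logged above are the verification half of the test. A hedged sketch of what such checks look like against the RSGroupAdmin API follows; the method names are the standard ones, while the assertion style is purely illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative: rsGroupAdmin is the RSGroupAdmin client from the earlier sketch.
    // Confirm both tables now belong to the target group.
    RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_635189350");
    assert group.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveA"));
    assert group.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveB"));

    RSGroupInfo ofTable =
        rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
    assert "Group_testMultiTableMove_635189350".equals(ofTable.getName());
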
2023-07-21 05:14:29,860 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 05:14:29,862 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 05:14:29,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, UNASSIGN}] 2023-07-21 05:14:29,868 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, UNASSIGN 2023-07-21 05:14:29,868 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:29,868 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916469868"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916469868"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916469868"}]},"ts":"1689916469868"} 2023-07-21 05:14:29,870 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:29,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 05:14:30,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2f8e94a4b2fb120cc3f31e47523a8e9e, disabling compactions & flushes 2023-07-21 05:14:30,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. after waiting 0 ms 2023-07-21 05:14:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 
2023-07-21 05:14:30,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:30,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e. 2023-07-21 05:14:30,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2f8e94a4b2fb120cc3f31e47523a8e9e: 2023-07-21 05:14:30,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:30,034 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2f8e94a4b2fb120cc3f31e47523a8e9e, regionState=CLOSED 2023-07-21 05:14:30,034 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916470033"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916470033"}]},"ts":"1689916470033"} 2023-07-21 05:14:30,036 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-21 05:14:30,036 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 2f8e94a4b2fb120cc3f31e47523a8e9e, server=jenkins-hbase4.apache.org,33541,1689916455330 in 165 msec 2023-07-21 05:14:30,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-21 05:14:30,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2f8e94a4b2fb120cc3f31e47523a8e9e, UNASSIGN in 173 msec 2023-07-21 05:14:30,039 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916470038"}]},"ts":"1689916470038"} 2023-07-21 05:14:30,040 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 05:14:30,041 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 05:14:30,043 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 187 msec 2023-07-21 05:14:30,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-21 05:14:30,161 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-21 05:14:30,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-21 05:14:30,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,166 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_635189350' 2023-07-21 05:14:30,167 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:30,172 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:30,174 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits] 2023-07-21 05:14:30,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,182 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e/recovered.edits/7.seqid 2023-07-21 05:14:30,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 05:14:30,182 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveA/2f8e94a4b2fb120cc3f31e47523a8e9e 2023-07-21 05:14:30,183 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 05:14:30,191 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,193 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-21 05:14:30,195 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 05:14:30,203 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,203 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-21 05:14:30,203 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916470203"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:30,205 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 05:14:30,205 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2f8e94a4b2fb120cc3f31e47523a8e9e, NAME => 'GrouptestMultiTableMoveA,,1689916467030.2f8e94a4b2fb120cc3f31e47523a8e9e.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 05:14:30,205 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 05:14:30,205 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916470205"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:30,207 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 05:14:30,209 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 05:14:30,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 47 msec 2023-07-21 05:14:30,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-21 05:14:30,283 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-21 05:14:30,284 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 05:14:30,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-21 05:14:30,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 05:14:30,289 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916470288"}]},"ts":"1689916470288"} 2023-07-21 05:14:30,290 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 05:14:30,292 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 05:14:30,293 INFO 
[PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, UNASSIGN}] 2023-07-21 05:14:30,295 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, UNASSIGN 2023-07-21 05:14:30,295 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:30,296 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916470295"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916470295"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916470295"}]},"ts":"1689916470295"} 2023-07-21 05:14:30,297 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,33541,1689916455330}] 2023-07-21 05:14:30,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 05:14:30,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:30,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 179f7f61354c015d36d8b2a10c856f86, disabling compactions & flushes 2023-07-21 05:14:30,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:30,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:30,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. after waiting 0 ms 2023-07-21 05:14:30,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 2023-07-21 05:14:30,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:30,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86. 
2023-07-21 05:14:30,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 179f7f61354c015d36d8b2a10c856f86: 2023-07-21 05:14:30,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:30,458 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=179f7f61354c015d36d8b2a10c856f86, regionState=CLOSED 2023-07-21 05:14:30,459 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689916470458"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916470458"}]},"ts":"1689916470458"} 2023-07-21 05:14:30,461 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 05:14:30,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 179f7f61354c015d36d8b2a10c856f86, server=jenkins-hbase4.apache.org,33541,1689916455330 in 163 msec 2023-07-21 05:14:30,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-21 05:14:30,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=179f7f61354c015d36d8b2a10c856f86, UNASSIGN in 168 msec 2023-07-21 05:14:30,464 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916470464"}]},"ts":"1689916470464"} 2023-07-21 05:14:30,465 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 05:14:30,467 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 05:14:30,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 183 msec 2023-07-21 05:14:30,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-21 05:14:30,590 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-21 05:14:30,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-21 05:14:30,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,594 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_635189350' 2023-07-21 05:14:30,595 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:30,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:30,601 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits] 2023-07-21 05:14:30,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 05:14:30,611 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits/7.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86/recovered.edits/7.seqid 2023-07-21 05:14:30,613 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/GrouptestMultiTableMoveB/179f7f61354c015d36d8b2a10c856f86 2023-07-21 05:14:30,613 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 05:14:30,616 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,618 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 05:14:30,620 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 05:14:30,621 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,621 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-21 05:14:30,621 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916470621"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:30,623 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 05:14:30,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 179f7f61354c015d36d8b2a10c856f86, NAME => 'GrouptestMultiTableMoveB,,1689916468169.179f7f61354c015d36d8b2a10c856f86.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 05:14:30,623 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 05:14:30,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916470623"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:30,625 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 05:14:30,627 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 05:14:30,628 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 36 msec 2023-07-21 05:14:30,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-21 05:14:30,706 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-21 05:14:30,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:30,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33541] to rsgroup default 2023-07-21 05:14:30,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_635189350 2023-07-21 05:14:30,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_635189350, current retry=0 2023-07-21 05:14:30,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330] are moved back to Group_testMultiTableMove_635189350 2023-07-21 05:14:30,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_635189350 => default 2023-07-21 05:14:30,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_635189350 2023-07-21 05:14:30,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:30,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:30,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:30,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:30,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:30,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,732 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:30,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:30,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:30,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:30,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917670743, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:30,743 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:30,745 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:30,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,746 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:30,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,764 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=515 (was 515), OpenFileDescriptor=790 (was 790), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 484), ProcessCount=174 (was 174), AvailableMemoryMB=3763 (was 3910) 2023-07-21 05:14:30,764 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:30,781 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515, OpenFileDescriptor=790, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=174, AvailableMemoryMB=3767 2023-07-21 05:14:30,781 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:30,781 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 05:14:30,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:30,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:30,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:30,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:30,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,799 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:30,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:30,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:30,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:30,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917670811, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:30,812 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:30,813 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:30,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,815 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:30,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-21 05:14:30,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup oldGroup 2023-07-21 05:14:30,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to default 2023-07-21 05:14:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 05:14:30,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 05:14:30,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 05:14:30,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,843 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-21 05:14:30,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 05:14:30,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:30,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42093] to rsgroup anotherRSGroup 2023-07-21 05:14:30,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 05:14:30,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:30,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:30,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42093,1689916451283] are moved back to default 2023-07-21 05:14:30,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 05:14:30,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,862 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 05:14:30,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 05:14:30,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 05:14:30,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:40408 deadline: 1689917670870, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 05:14:30,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 05:14:30,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:40408 deadline: 1689917670872, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 05:14:30,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-21 05:14:30,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:40408 deadline: 1689917670873, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 05:14:30,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-21 05:14:30,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:40408 deadline: 1689917670873, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 05:14:30,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
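The three ConstraintException stacks above correspond to the rename-validation rules exercised by testRenameRSGroupConstraints: the source group must exist, the target name must not already be taken, and the default group can never be renamed. Below is a minimal client-side sketch of the same three calls, assuming the branch-2.4 RSGroupAdminClient used elsewhere in this log; the Connection-based constructor and the renameRSGroup method name are assumptions, only addRSGroup/moveServers appear verbatim in the stack traces above.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintsSketch {
  // Each call is expected to fail with a ConstraintException, mirroring the
  // three server-side checks logged above.
  static void exercise(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn); // assumption: public constructor taking a Connection
    try {
      admin.renameRSGroup("nonExistingRSGroup", "newRSGroup1"); // source group does not exist
    } catch (ConstraintException expected) { /* "RSGroup nonExistingRSGroup does not exist" */ }
    try {
      admin.renameRSGroup("oldGroup", "anotherRSGroup");        // target name already taken
    } catch (ConstraintException expected) { /* "Group already exists: anotherRSGroup" */ }
    try {
      admin.renameRSGroup("default", "newRSGroup2");            // default group cannot be renamed
    } catch (ConstraintException expected) { /* "Can't rename default rsgroup" */ }
  }
}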
2023-07-21 05:14:30,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42093] to rsgroup default 2023-07-21 05:14:30,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 05:14:30,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:30,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 05:14:30,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42093,1689916451283] are moved back to anotherRSGroup 2023-07-21 05:14:30,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 05:14:30,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-21 05:14:30,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 05:14:30,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 05:14:30,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup default 2023-07-21 05:14:30,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 05:14:30,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:30,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 05:14:30,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to oldGroup 2023-07-21 05:14:30,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 05:14:30,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-21 05:14:30,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:30,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
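Every membership change during the teardown above rewrites one znode per group under /hbase/rsgroup and then logs the total GroupInfo count. A small sketch, assuming the default zookeeper.znode.parent of /hbase and a reachable quorum (connect string and timeout are illustrative), that lists those per-group znodes with a plain ZooKeeper client:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class RsGroupZnodeDump {
  // Lists the per-group znodes that RSGroupInfoManagerImpl rewrites on every
  // membership change (the "Updating znode: /hbase/rsgroup/<group>" lines above).
  public static void main(String[] args) throws Exception {
    String quorum = args.length > 0 ? args[0] : "localhost:2181"; // assumption: default client port
    ZooKeeper zk = new ZooKeeper(quorum, 30_000, (WatchedEvent e) -> { });
    try {
      List<String> groups = zk.getChildren("/hbase/rsgroup", false); // assumption: default /hbase parent znode
      for (String g : groups) {
        System.out.println("/hbase/rsgroup/" + g);
      }
    } finally {
      zk.close();
    }
  }
}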
2023-07-21 05:14:30,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:30,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:30,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:30,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,912 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:30,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:30,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:30,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:30,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917670924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:30,925 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:30,926 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:30,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,927 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:30,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,945 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=519 (was 515) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=790 (was 790), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 478), ProcessCount=174 (was 174), AvailableMemoryMB=3778 (was 3767) - AvailableMemoryMB LEAK? - 2023-07-21 05:14:30,945 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-21 05:14:30,961 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=519, OpenFileDescriptor=790, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=174, AvailableMemoryMB=3785 2023-07-21 05:14:30,961 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-21 05:14:30,961 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 05:14:30,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:30,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
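The "Got this on setup, FYI" warning that recurs before each test comes from TestRSGroupsBase re-creating a "master" group and then attempting to move the active master's own address (port 42467, the master RPC port, not a region server) into it; the server rejects the move with the "is either offline or it does not exist" ConstraintException and the test merely logs it. A sketch of that pattern, with the same caveats as before about the assumed RSGroupAdminClient constructor:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSetupSketch {
  // Mirrors the setup/teardown step logged above: create a "master" group and
  // try to move the master's address into it; the move is expected to fail.
  static void tryCarveOutMasterGroup(Connection conn, String masterHostPort) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn); // assumption: public Connection-based constructor
    admin.addRSGroup("master");
    try {
      admin.moveServers(Collections.singleton(Address.fromString(masterHostPort)), "master");
    } catch (ConstraintException offlineOrUnknown) {
      // "Server <host:port> is either offline or it does not exist." ... expected; the test only logs a WARN
    }
  }
}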
2023-07-21 05:14:30,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:30,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:30,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:30,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:30,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:30,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:30,981 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:30,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:30,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:30,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:30,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:30,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:30,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:30,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:30,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917670994, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:30,995 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:30,997 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:30,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:30,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:30,998 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:30,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:30,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:30,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:31,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-21 05:14:31,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:31,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:31,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:31,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:31,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:31,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:31,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:31,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup oldgroup 2023-07-21 05:14:31,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:31,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:31,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:31,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:31,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:31,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to default 2023-07-21 05:14:31,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 05:14:31,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:31,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:31,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:31,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 05:14:31,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:31,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:31,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 05:14:31,029 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:31,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-21 05:14:31,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 05:14:31,032 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:31,032 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:31,033 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:31,033 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:31,035 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:31,037 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,038 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c empty. 
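The CreateTableProcedure above (pid=114) materializes 'testRename' with a single column family 'tr' and otherwise stock descriptor attributes. An equivalent client-side creation, using the standard HBase 2.x Admin and TableDescriptorBuilder APIs (the rsgroup placement itself is decided server-side by the endpoint, as the surrounding znode updates show):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTable {
  // Builds the same single-family descriptor that PEWorker-3 writes above
  // (family "tr", one region replica, remaining attributes left at defaults).
  static void create(Connection conn) throws java.io.IOException {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(td); // returns once the create-table procedure has completed
    }
  }
}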
2023-07-21 05:14:31,038 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,038 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 05:14:31,058 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:31,059 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 55eed975c710f1801bb4aedb9ff16d4c, NAME => 'testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 55eed975c710f1801bb4aedb9ff16d4c, disabling compactions & flushes 2023-07-21 05:14:31,076 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. after waiting 0 ms 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,076 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,076 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:31,079 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:31,080 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471079"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916471079"}]},"ts":"1689916471079"} 2023-07-21 05:14:31,081 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
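The schema echoed in the create request above ('testRename' with a single column family 'tr', one version, no bloom filter; the remaining attributes are defaults) maps directly onto the 2.x builder API. A hedged sketch of how a client could issue that create, assuming an already-open Connection named conn; this is an illustration of the request, not the test's own code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRename {
  static void create(Connection conn) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)                          // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("tr"))
            .setMaxVersions(1)                            // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE)           // BLOOMFILTER => 'NONE'
            .setBlocksize(65536)                          // BLOCKSIZE => '65536'
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      // The master executes this as a CreateTableProcedure (pid=114 in the log).
      admin.createTable(desc);
    }
  }
}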
2023-07-21 05:14:31,082 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:31,082 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916471082"}]},"ts":"1689916471082"} 2023-07-21 05:14:31,084 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 05:14:31,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:31,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:31,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:31,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:31,088 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, ASSIGN}] 2023-07-21 05:14:31,090 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, ASSIGN 2023-07-21 05:14:31,090 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:31,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 05:14:31,241 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
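The recurring "Checking to see if procedure is done pid=114" lines are the master answering the client's completion polls: a synchronous Admin.createTable blocks on exactly this loop. A sketch of the asynchronous form, which surfaces the same wait as a Future; the 60-second timeout is illustrative, not something the test is known to use.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

class CreateAndWait {
  static void createAndWait(Admin admin, TableDescriptor desc) throws Exception {
    // Submits the CreateTableProcedure and returns once the master has stored it
    // (the "procId is: 114" acknowledgement above); null means no pre-split keys.
    Future<Void> pending = admin.createTableAsync(desc, null);
    // get() polls the master -- the repeated "Checking to see if procedure is done"
    // entries -- until the procedure reaches SUCCESS or the timeout expires.
    pending.get(60, TimeUnit.SECONDS);
  }
}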
2023-07-21 05:14:31,242 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:31,242 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916471242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916471242"}]},"ts":"1689916471242"} 2023-07-21 05:14:31,244 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:31,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 05:14:31,400 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55eed975c710f1801bb4aedb9ff16d4c, NAME => 'testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:31,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:31,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,403 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,404 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:31,404 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:31,405 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55eed975c710f1801bb4aedb9ff16d4c columnFamilyName tr 2023-07-21 05:14:31,405 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(310): Store=55eed975c710f1801bb4aedb9ff16d4c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:31,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:31,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55eed975c710f1801bb4aedb9ff16d4c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11149048640, jitterRate=0.03833606839179993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:31,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:31,414 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c., pid=116, masterSystemTime=1689916471396 2023-07-21 05:14:31,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
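The CompactionConfiguration line printed while the 'tr' store opens is just the effective values of the standard compaction settings. The mapping below from the logged figures back to their configuration keys is my reading of them; all of these are the defaults in this run, so setting them explicitly is redundant and the sketch only shows which key controls which number.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class CompactionTuning {
  static Configuration defaults() {
    Configuration conf = HBaseConfiguration.create();
    // minCompactSize:128 MB -> files below this size always qualify for compaction
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    // minFilesToCompact:3 / maxFilesToCompact:10
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    // ratio 1.200000 / off-peak ratio 5.000000
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // throttle point 2684354560 (2.5 GB): larger compactions go to the "large" thread pool
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);
    // major period 604800000 (7 days), major jitter 0.500000
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
    return conf;
  }
}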
2023-07-21 05:14:31,416 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:31,417 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471416"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916471416"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916471416"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916471416"}]},"ts":"1689916471416"} 2023-07-21 05:14:31,421 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-21 05:14:31,421 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42315,1689916451166 in 174 msec 2023-07-21 05:14:31,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 05:14:31,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, ASSIGN in 333 msec 2023-07-21 05:14:31,424 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:31,424 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916471424"}]},"ts":"1689916471424"} 2023-07-21 05:14:31,425 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 05:14:31,428 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:31,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 402 msec 2023-07-21 05:14:31,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-21 05:14:31,634 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-21 05:14:31,634 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 05:14:31,634 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:31,637 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-21 05:14:31,637 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:31,637 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
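The "Waiting until all regions of table testRename get assigned" messages come from the test harness rather than the cluster; HBaseTestingUtility exposes that wait directly. A minimal sketch, assuming the mini cluster is held in a variable named TEST_UTIL (an assumption for illustration, not something visible in the log):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

class WaitForAssignment {
  static void waitForTestRename(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Blocks until hbase:meta and the AssignmentManager agree that every
    // region of the table is open somewhere (default timeout 60000 ms).
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
  }
}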
2023-07-21 05:14:31,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-21 05:14:31,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:31,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:31,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:31,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:31,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 05:14:31,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 55eed975c710f1801bb4aedb9ff16d4c to RSGroup oldgroup 2023-07-21 05:14:31,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:31,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:31,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:31,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:31,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:31,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE 2023-07-21 05:14:31,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 05:14:31,647 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE 2023-07-21 05:14:31,648 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:31,648 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471648"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916471648"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916471648"}]},"ts":"1689916471648"} 2023-07-21 05:14:31,649 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:31,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55eed975c710f1801bb4aedb9ff16d4c, disabling compactions & flushes 2023-07-21 05:14:31,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. after waiting 0 ms 2023-07-21 05:14:31,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:31,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:31,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:31,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 55eed975c710f1801bb4aedb9ff16d4c move to jenkins-hbase4.apache.org,40677,1689916451367 record at close sequenceid=2 2023-07-21 05:14:31,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:31,811 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=CLOSED 2023-07-21 05:14:31,812 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471811"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916471811"}]},"ts":"1689916471811"} 2023-07-21 05:14:31,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 05:14:31,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42315,1689916451166 in 164 msec 2023-07-21 05:14:31,815 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40677,1689916451367; 
forceNewPlan=false, retain=false 2023-07-21 05:14:31,965 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 05:14:31,966 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:31,966 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916471966"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916471966"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916471966"}]},"ts":"1689916471966"} 2023-07-21 05:14:31,970 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:32,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:32,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55eed975c710f1801bb4aedb9ff16d4c, NAME => 'testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:32,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:32,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,132 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,134 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:32,134 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:32,134 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55eed975c710f1801bb4aedb9ff16d4c columnFamilyName tr 2023-07-21 05:14:32,135 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(310): Store=55eed975c710f1801bb4aedb9ff16d4c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:32,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:32,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55eed975c710f1801bb4aedb9ff16d4c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9592635200, jitterRate=-0.10661622881889343}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:32,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:32,144 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c., pid=119, masterSystemTime=1689916472122 2023-07-21 05:14:32,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:32,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
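The MoveTables request at 05:14:31,639 and the REOPEN/MOVE procedure it spawned (pid=117) correspond to a single client call: the call only returns after the region has been closed on ...,42315 and reopened on ...,40677, which is the ProcedureSyncWait visible just below. A sketch of that call plus a read-back of the table's group, reusing the RSGroupAdminClient from the earlier sketch; variable names are illustrative.

import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

class MoveTableToOldGroup {
  static void move(RSGroupAdminClient rsGroupAdmin) throws Exception {
    Set<TableName> tables = new TreeSet<>();
    tables.add(TableName.valueOf("testRename"));
    // Blocks until every region of the table has been reopened on a server
    // in the target group (the waitFor on pid=117 in the log).
    rsGroupAdmin.moveTables(tables, "oldgroup");
    // GetRSGroupInfoOfTable: confirm the table now resolves to oldgroup.
    RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
    System.out.println(group.getName());   // expected: oldgroup
  }
}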
2023-07-21 05:14:32,146 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:32,146 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916472146"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916472146"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916472146"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916472146"}]},"ts":"1689916472146"} 2023-07-21 05:14:32,150 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-21 05:14:32,150 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,40677,1689916451367 in 178 msec 2023-07-21 05:14:32,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE in 504 msec 2023-07-21 05:14:32,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-21 05:14:32,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-21 05:14:32,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:32,654 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:32,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 05:14:32,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:32,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 05:14:32,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:32,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 05:14:32,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:32,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:32,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:32,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-21 05:14:32,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:32,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:32,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:32,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:32,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:32,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:32,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:32,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:32,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42093] to rsgroup normal 2023-07-21 05:14:32,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:32,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:32,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:32,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:32,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42093,1689916451283] are moved back to default 2023-07-21 05:14:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 05:14:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:32,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:32,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:32,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 05:14:32,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:32,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:32,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 05:14:32,689 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:32,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-21 05:14:32,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 05:14:32,692 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:32,692 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:32,693 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:32,693 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 05:14:32,694 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:32,696 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:32,698 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:32,698 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 empty. 2023-07-21 05:14:32,699 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:32,699 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 05:14:32,718 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:32,723 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => c2dfaca75b68ed6d2ff1887a0a0f2c22, NAME => 'unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing c2dfaca75b68ed6d2ff1887a0a0f2c22, disabling compactions & flushes 2023-07-21 05:14:32,743 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. after waiting 0 ms 2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:32,743 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
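With the 'normal' group added and jenkins-hbase4.apache.org:42093 moved into it, the ListRSGroupInfos and GetRSGroupInfo requests above amount to reading the group layout back. A sketch of the corresponding inspection calls; the expectation in the final comment is inferred from this run, not asserted by the log itself.

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

class InspectGroups {
  static void dump(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // ListRSGroupInfos: print every group and the servers it currently owns.
    for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
      System.out.println(info.getName() + " -> " + info.getServers());
    }
    // GetRSGroupOfServer: which group jenkins-hbase4.apache.org:42093 landed in.
    RSGroupInfo ofServer =
        rsGroupAdmin.getRSGroupOfServer(Address.fromParts("jenkins-hbase4.apache.org", 42093));
    System.out.println(ofServer.getName());   // expected: normal
  }
}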
2023-07-21 05:14:32,743 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:32,746 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:32,747 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916472747"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916472747"}]},"ts":"1689916472747"} 2023-07-21 05:14:32,748 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:32,749 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:32,749 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916472749"}]},"ts":"1689916472749"} 2023-07-21 05:14:32,751 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 05:14:32,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, ASSIGN}] 2023-07-21 05:14:32,758 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, ASSIGN 2023-07-21 05:14:32,759 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:32,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 05:14:32,910 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:32,911 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916472910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916472910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916472910"}]},"ts":"1689916472910"} 2023-07-21 05:14:32,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:32,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-21 05:14:33,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2dfaca75b68ed6d2ff1887a0a0f2c22, NAME => 'unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:33,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:33,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,071 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,072 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:33,072 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:33,073 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2dfaca75b68ed6d2ff1887a0a0f2c22 columnFamilyName ut 2023-07-21 05:14:33,073 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(310): Store=c2dfaca75b68ed6d2ff1887a0a0f2c22/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:33,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:33,081 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2dfaca75b68ed6d2ff1887a0a0f2c22; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10366386720, jitterRate=-0.03455500304698944}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:33,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:33,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22., pid=122, masterSystemTime=1689916473064 2023-07-21 05:14:33,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
2023-07-21 05:14:33,087 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:33,087 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916473087"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916473087"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916473087"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916473087"}]},"ts":"1689916473087"} 2023-07-21 05:14:33,098 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-21 05:14:33,098 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166 in 175 msec 2023-07-21 05:14:33,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 05:14:33,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, ASSIGN in 343 msec 2023-07-21 05:14:33,102 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:33,102 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916473102"}]},"ts":"1689916473102"} 2023-07-21 05:14:33,105 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 05:14:33,108 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:33,112 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 423 msec 2023-07-21 05:14:33,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-21 05:14:33,294 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-21 05:14:33,295 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 05:14:33,295 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:33,298 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-21 05:14:33,299 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:33,299 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
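At this point unmovedTable is open on jenkins-hbase4.apache.org,42315 and the next step moves it into the 'normal' group. A sketch of how a client could confirm a region's current server from hbase:meta before and after such a move, assuming an open Connection named conn; the table name is the one from this run.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

class WhereIsMyRegion {
  static void locate(Connection conn) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("unmovedTable"))) {
      // reload=true bypasses the client-side cache and re-reads hbase:meta,
      // which matters right after a REOPEN/MOVE like the one traced below.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
    }
  }
}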
2023-07-21 05:14:33,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-21 05:14:33,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 05:14:33,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:33,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:33,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:33,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:33,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 05:14:33,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region c2dfaca75b68ed6d2ff1887a0a0f2c22 to RSGroup normal 2023-07-21 05:14:33,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE 2023-07-21 05:14:33,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 05:14:33,309 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE 2023-07-21 05:14:33,309 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:33,310 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916473309"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916473309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916473309"}]},"ts":"1689916473309"} 2023-07-21 05:14:33,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:33,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2dfaca75b68ed6d2ff1887a0a0f2c22, disabling compactions & flushes 2023-07-21 05:14:33,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
2023-07-21 05:14:33,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. after waiting 0 ms 2023-07-21 05:14:33,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:33,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:33,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c2dfaca75b68ed6d2ff1887a0a0f2c22 move to jenkins-hbase4.apache.org,42093,1689916451283 record at close sequenceid=2 2023-07-21 05:14:33,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,473 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=CLOSED 2023-07-21 05:14:33,473 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916473473"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916473473"}]},"ts":"1689916473473"} 2023-07-21 05:14:33,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-21 05:14:33,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166 in 163 msec 2023-07-21 05:14:33,477 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:33,627 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:33,627 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916473627"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916473627"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916473627"}]},"ts":"1689916473627"} 2023-07-21 05:14:33,629 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:33,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2dfaca75b68ed6d2ff1887a0a0f2c22, NAME => 'unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:33,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:33,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,787 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,788 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:33,788 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:33,788 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
c2dfaca75b68ed6d2ff1887a0a0f2c22 columnFamilyName ut 2023-07-21 05:14:33,789 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(310): Store=c2dfaca75b68ed6d2ff1887a0a0f2c22/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:33,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,791 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:33,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2dfaca75b68ed6d2ff1887a0a0f2c22; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11297407680, jitterRate=0.052153080701828}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:33,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:33,796 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22., pid=125, masterSystemTime=1689916473781 2023-07-21 05:14:33,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:33,797 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
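The 05:14:33,301 through 05:14:33,797 records above are a single MoveTables request: the master rewrites the /hbase/rsgroup znodes, then REOPEN/MOVE procedure pid=123 closes the region on jenkins-hbase4.apache.org,42315 and reopens it on jenkins-hbase4.apache.org,42093 inside the target group. A hedged sketch of the corresponding client call, assuming the RSGroupAdminClient API that appears in the stack traces near the end of this log; the wrapper method name is illustrative:

  import java.util.Collections;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

  static void moveUnmovedTableToNormal(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Issues RSGroupAdminService.MoveTables; the master then reopens each region of the
    // table on a server that belongs to the target group, as traced by pid=123 above.
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
  }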
2023-07-21 05:14:33,798 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:33,798 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916473798"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916473798"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916473798"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916473798"}]},"ts":"1689916473798"} 2023-07-21 05:14:33,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-21 05:14:33,801 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42093,1689916451283 in 170 msec 2023-07-21 05:14:33,802 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE in 493 msec 2023-07-21 05:14:34,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-21 05:14:34,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 05:14:34,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:34,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:34,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:34,316 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:34,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 05:14:34,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:34,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 05:14:34,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:34,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 05:14:34,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:34,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-21 05:14:34,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:34,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:34,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:34,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:34,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 05:14:34,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 05:14:34,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:34,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:34,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-21 05:14:34,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:34,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 05:14:34,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:34,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 05:14:34,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:34,336 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:34,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:34,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-21 05:14:34,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:34,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:34,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:34,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:34,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:34,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 05:14:34,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region c2dfaca75b68ed6d2ff1887a0a0f2c22 to RSGroup default 2023-07-21 05:14:34,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE 2023-07-21 05:14:34,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 05:14:34,347 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE 2023-07-21 05:14:34,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:34,347 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916474347"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916474347"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916474347"}]},"ts":"1689916474347"} 2023-07-21 05:14:34,348 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:34,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2dfaca75b68ed6d2ff1887a0a0f2c22, disabling compactions & flushes 2023-07-21 05:14:34,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. after waiting 0 ms 2023-07-21 05:14:34,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:34,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:34,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c2dfaca75b68ed6d2ff1887a0a0f2c22 move to jenkins-hbase4.apache.org,42315,1689916451166 record at close sequenceid=5 2023-07-21 05:14:34,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,510 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=CLOSED 2023-07-21 05:14:34,510 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916474510"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916474510"}]},"ts":"1689916474510"} 2023-07-21 05:14:34,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-21 05:14:34,513 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42093,1689916451283 in 163 msec 2023-07-21 05:14:34,513 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:34,664 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:34,664 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916474663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916474663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916474663"}]},"ts":"1689916474663"} 2023-07-21 05:14:34,665 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:34,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2dfaca75b68ed6d2ff1887a0a0f2c22, NAME => 'unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:34,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:34,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,824 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,825 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:34,825 DEBUG [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/ut 2023-07-21 05:14:34,826 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2dfaca75b68ed6d2ff1887a0a0f2c22 columnFamilyName ut 2023-07-21 05:14:34,826 INFO [StoreOpener-c2dfaca75b68ed6d2ff1887a0a0f2c22-1] regionserver.HStore(310): Store=c2dfaca75b68ed6d2ff1887a0a0f2c22/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:34,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,833 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 05:14:34,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:34,835 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2dfaca75b68ed6d2ff1887a0a0f2c22; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11700874240, jitterRate=0.08972883224487305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:34,835 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:34,836 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22., pid=128, masterSystemTime=1689916474817 2023-07-21 05:14:34,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:34,837 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
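The request at 05:14:34,319 above ("rename rsgroup from oldgroup to newgroup", answered by the RenameRSGroup master service call) only rewrites group metadata in the rsgroup znodes; no regions move. A hedged sketch, assuming RSGroupAdminClient exposes renameRSGroup on this branch; the group names come from the log and the wrapper method is illustrative:

  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  static void renameOldGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    // Tables previously mapped to oldgroup (testRename in this run) should now resolve to newgroup.
    RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
    if (renamed == null) {
      throw new IllegalStateException("newgroup not found after rename");
    }
  }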
2023-07-21 05:14:34,838 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c2dfaca75b68ed6d2ff1887a0a0f2c22, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:34,838 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689916474838"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916474838"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916474838"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916474838"}]},"ts":"1689916474838"} 2023-07-21 05:14:34,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-21 05:14:34,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure c2dfaca75b68ed6d2ff1887a0a0f2c22, server=jenkins-hbase4.apache.org,42315,1689916451166 in 174 msec 2023-07-21 05:14:34,843 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=c2dfaca75b68ed6d2ff1887a0a0f2c22, REOPEN/MOVE in 495 msec 2023-07-21 05:14:35,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-21 05:14:35,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-21 05:14:35,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:35,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42093] to rsgroup default 2023-07-21 05:14:35,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 05:14:35,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:35,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:35,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:35,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:35,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 05:14:35,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42093,1689916451283] are moved back to normal 2023-07-21 05:14:35,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 05:14:35,353 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:35,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-21 05:14:35,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:35,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:35,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:35,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 05:14:35,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:35,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:35,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:35,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:35,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:35,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:35,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:35,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:35,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:35,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:35,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:35,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-21 05:14:35,372 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:35,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:35,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:35,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 05:14:35,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(345): Moving region 55eed975c710f1801bb4aedb9ff16d4c to RSGroup default 2023-07-21 05:14:35,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE 2023-07-21 05:14:35,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 05:14:35,376 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE 2023-07-21 05:14:35,377 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:35,377 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916475377"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916475377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916475377"}]},"ts":"1689916475377"} 2023-07-21 05:14:35,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,40677,1689916451367}] 2023-07-21 05:14:35,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55eed975c710f1801bb4aedb9ff16d4c, disabling compactions & flushes 2023-07-21 05:14:35,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
after waiting 0 ms 2023-07-21 05:14:35,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 05:14:35,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:35,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 55eed975c710f1801bb4aedb9ff16d4c move to jenkins-hbase4.apache.org,42093,1689916451283 record at close sequenceid=5 2023-07-21 05:14:35,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,540 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=CLOSED 2023-07-21 05:14:35,540 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916475540"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916475540"}]},"ts":"1689916475540"} 2023-07-21 05:14:35,543 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-21 05:14:35,543 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,40677,1689916451367 in 164 msec 2023-07-21 05:14:35,543 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:35,694 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:35,694 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:35,694 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916475694"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916475694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916475694"}]},"ts":"1689916475694"} 2023-07-21 05:14:35,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:35,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,851 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55eed975c710f1801bb4aedb9ff16d4c, NAME => 'testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:35,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:35,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,853 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,854 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:35,854 DEBUG [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/tr 2023-07-21 05:14:35,855 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55eed975c710f1801bb4aedb9ff16d4c columnFamilyName tr 2023-07-21 05:14:35,855 INFO [StoreOpener-55eed975c710f1801bb4aedb9ff16d4c-1] regionserver.HStore(310): Store=55eed975c710f1801bb4aedb9ff16d4c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:35,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:35,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55eed975c710f1801bb4aedb9ff16d4c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9885837600, jitterRate=-0.0793096274137497}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:35,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:35,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c., pid=131, masterSystemTime=1689916475847 2023-07-21 05:14:35,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:35,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
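The records from 05:14:34,338 onward are per-test cleanup (TestRSGroupsBase.tearDownAfterMethod in the stack trace further down): tables and servers are moved back to the default group and the leftover groups (normal, master, newgroup) are removed. A simplified, hypothetical sketch of that restore loop, assuming the RSGroupAdminClient API used elsewhere in this log; the real teardown also re-adds a master group and tolerates the ConstraintException shown below:

  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  static void restoreDefaultGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // nothing to restore for the default group itself
      }
      // Empty sets are accepted; the server just logs "passed an empty set. Ignoring." as above.
      rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }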
2023-07-21 05:14:35,864 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=55eed975c710f1801bb4aedb9ff16d4c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:35,864 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689916475864"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916475864"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916475864"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916475864"}]},"ts":"1689916475864"} 2023-07-21 05:14:35,867 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-21 05:14:35,867 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 55eed975c710f1801bb4aedb9ff16d4c, server=jenkins-hbase4.apache.org,42093,1689916451283 in 170 msec 2023-07-21 05:14:35,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=55eed975c710f1801bb4aedb9ff16d4c, REOPEN/MOVE in 492 msec 2023-07-21 05:14:36,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-21 05:14:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-21 05:14:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:36,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup default 2023-07-21 05:14:36,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 05:14:36,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:36,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 05:14:36,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to newgroup 2023-07-21 05:14:36,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 05:14:36,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:36,382 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-21 05:14:36,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:36,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:36,391 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:36,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:36,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:36,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:36,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-21 05:14:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917676406, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist.
2023-07-21 05:14:36,407 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:36,409 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:36,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,410 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:36,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:36,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,428 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=511 (was 519), OpenFileDescriptor=775 (was 790), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 478), ProcessCount=174 (was 174), AvailableMemoryMB=3727 (was 3785) 2023-07-21 05:14:36,428 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-21 05:14:36,444 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=174, AvailableMemoryMB=3727 2023-07-21 05:14:36,444 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-21 05:14:36,444 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 05:14:36,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:36,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:36,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:36,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:36,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:36,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:36,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:36,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:36,457 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:36,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:36,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:36,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:36,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:36,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917676466, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:36,467 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:36,468 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:36,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,469 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:36,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:36,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 05:14:36,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:36,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 05:14:36,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 05:14:36,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-21 05:14:36,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-21 05:14:36,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:40408 deadline: 1689917676477, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 05:14:36,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-21 05:14:36,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:40408 deadline: 1689917676480, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 05:14:36,483 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 05:14:36,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-21 05:14:36,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-21 05:14:36,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:40408 deadline: 1689917676487, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 05:14:36,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:36,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
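The testBogusArgs entries above probe the RSGroup admin API with names that do not exist: the info lookups (group=bogus, server=bogus:123, table=nonexistent) complete without any logged error, while the mutating calls (remove rsgroup bogus, move servers [bogus:123] to rsgroup bogus, balance rsgroup bogus) are rejected with ConstraintException before any work is done. A minimal client-side sketch of those calls follows, assuming an open HBase Connection named conn; RSGroupAdminClient, Address and ConstraintException are the classes visible in the stack traces above, while the class and method names of the sketch itself are purely illustrative, not the actual test code.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  static void probe(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    // Lookups for unknown names come back without an exception in the log
    // (presumably returning null / empty results).
    admin.getRSGroupInfo("bogus");
    // Mutating calls against a nonexistent group are rejected by the master
    // and surface client-side as ConstraintException, as in the entries above.
    try {
      admin.removeRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup bogus does not exist"
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
    try {
      admin.balanceRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
  }
}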
2023-07-21 05:14:36,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:36,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:36,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:36,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:36,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:36,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:36,500 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:36,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:36,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:36,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:36,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:36,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917676509, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:36,513 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:36,514 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:36,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,515 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:36,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:36,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,532 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=515 (was 511) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f3aeb7a-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=775 (was 775), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=174 (was 174), AvailableMemoryMB=3727 (was 3727) 2023-07-21 05:14:36,532 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:36,548 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=174, AvailableMemoryMB=3726 2023-07-21 05:14:36,548 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-21 05:14:36,549 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 05:14:36,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:36,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
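The remove/add/move cycle for the "master" group repeats around every test method in this section and always ends the same way: moving the master's own address (jenkins-hbase4.apache.org:42467, the port in the RPC handler thread name) into the group fails with ConstraintException, apparently because the master is not registered as an online region server, and TestRSGroupsBase.tearDownAfterMethod just logs "Got this on setup, FYI" and carries on. A rough, illustrative sketch of that tolerant-cleanup idiom follows; it is not the actual test code, and admin, masterAddress and the class name are assumptions.

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CleanupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(CleanupSketch.class);

  // masterAddress would be the active master's address, e.g.
  // Address.fromParts("jenkins-hbase4.apache.org", 42467) in this run.
  static void restoreMasterGroup(RSGroupAdminClient admin, Address masterAddress) throws Exception {
    admin.removeRSGroup("master");   // drop the helper group left by the previous cycle
    admin.addRSGroup("master");      // recreate it empty
    try {
      // The master is not a live region server, so this move is expected to fail.
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      LOG.warn("Got this on setup, FYI", e);  // mirrors the WARN entries in this log
    }
  }
}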
2023-07-21 05:14:36,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:36,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:36,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:36,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:36,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:36,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:36,562 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:36,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:36,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:36,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:36,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:36,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:36,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917676572, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:36,572 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:36,574 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:36,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,575 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:36,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:36,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:36,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
05:14:36,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:36,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:36,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:36,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 05:14:36,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to default 2023-07-21 05:14:36,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:36,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:36,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:36,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,600 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:36,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:36,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:36,604 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:36,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-21 05:14:36,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 05:14:36,606 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:36,606 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:36,606 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:36,607 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:36,608 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d empty. 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 empty. 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 empty. 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c empty. 2023-07-21 05:14:36,612 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 empty. 2023-07-21 05:14:36,613 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:36,613 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:36,613 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:36,613 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:36,613 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:36,613 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 05:14:36,626 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:36,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 02496be51c62e7cee197c11d7f18573d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:36,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => a981c509f844b0d1fabc9102609dd6d3, NAME => 'Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:36,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => eaf8a3af0c5f09e87d3e6300cee4017c, NAME => 'Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:36,658 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing eaf8a3af0c5f09e87d3e6300cee4017c, disabling compactions & flushes 2023-07-21 05:14:36,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. after waiting 0 ms 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:36,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 
2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for eaf8a3af0c5f09e87d3e6300cee4017c: 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 02496be51c62e7cee197c11d7f18573d, disabling compactions & flushes 2023-07-21 05:14:36,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 55024b106350040b0ff76c2bfeb2a1c6, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:36,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. after waiting 0 ms 2023-07-21 05:14:36,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:36,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 
2023-07-21 05:14:36,660 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 02496be51c62e7cee197c11d7f18573d: 2023-07-21 05:14:36,660 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7c63ea6a49ddb24ae12dd463f9ba3188, NAME => 'Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp 2023-07-21 05:14:36,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:36,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 55024b106350040b0ff76c2bfeb2a1c6, disabling compactions & flushes 2023-07-21 05:14:36,672 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:36,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:36,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. after waiting 0 ms 2023-07-21 05:14:36,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:36,672 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:36,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 55024b106350040b0ff76c2bfeb2a1c6: 2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 7c63ea6a49ddb24ae12dd463f9ba3188, disabling compactions & flushes 2023-07-21 05:14:36,674 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 
2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. after waiting 0 ms 2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:36,674 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:36,674 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 7c63ea6a49ddb24ae12dd463f9ba3188: 2023-07-21 05:14:36,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 05:14:36,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 05:14:37,059 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,060 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing a981c509f844b0d1fabc9102609dd6d3, disabling compactions & flushes 2023-07-21 05:14:37,060 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,060 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,060 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. after waiting 0 ms 2023-07-21 05:14:37,060 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,060 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 
2023-07-21 05:14:37,060 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for a981c509f844b0d1fabc9102609dd6d3: 2023-07-21 05:14:37,062 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:37,063 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477063"}]},"ts":"1689916477063"} 2023-07-21 05:14:37,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477063"}]},"ts":"1689916477063"} 2023-07-21 05:14:37,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477063"}]},"ts":"1689916477063"} 2023-07-21 05:14:37,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477063"}]},"ts":"1689916477063"} 2023-07-21 05:14:37,064 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477063"}]},"ts":"1689916477063"} 2023-07-21 05:14:37,066 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-21 05:14:37,067 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:37,067 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916477067"}]},"ts":"1689916477067"} 2023-07-21 05:14:37,068 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 05:14:37,073 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:37,073 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:37,073 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:37,073 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:37,073 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, ASSIGN}] 2023-07-21 05:14:37,076 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, ASSIGN 2023-07-21 05:14:37,076 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, ASSIGN 2023-07-21 05:14:37,076 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, ASSIGN 2023-07-21 05:14:37,076 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, ASSIGN 2023-07-21 05:14:37,078 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, ASSIGN 2023-07-21 05:14:37,078 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:37,078 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42315,1689916451166; forceNewPlan=false, retain=false 2023-07-21 05:14:37,079 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:37,079 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:37,079 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42093,1689916451283; forceNewPlan=false, retain=false 2023-07-21 05:14:37,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 05:14:37,229 INFO [jenkins-hbase4:42467] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-21 05:14:37,232 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=a981c509f844b0d1fabc9102609dd6d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,232 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=eaf8a3af0c5f09e87d3e6300cee4017c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,233 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477232"}]},"ts":"1689916477232"} 2023-07-21 05:14:37,232 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=55024b106350040b0ff76c2bfeb2a1c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,232 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7c63ea6a49ddb24ae12dd463f9ba3188, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,233 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477232"}]},"ts":"1689916477232"} 2023-07-21 05:14:37,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477232"}]},"ts":"1689916477232"} 2023-07-21 05:14:37,232 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=02496be51c62e7cee197c11d7f18573d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,233 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477232"}]},"ts":"1689916477232"} 2023-07-21 05:14:37,233 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477232"}]},"ts":"1689916477232"} 2023-07-21 05:14:37,234 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure a981c509f844b0d1fabc9102609dd6d3, 
server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=136, state=RUNNABLE; OpenRegionProcedure 55024b106350040b0ff76c2bfeb2a1c6, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,235 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=137, state=RUNNABLE; OpenRegionProcedure 7c63ea6a49ddb24ae12dd463f9ba3188, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:37,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=133, state=RUNNABLE; OpenRegionProcedure eaf8a3af0c5f09e87d3e6300cee4017c, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,238 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=135, state=RUNNABLE; OpenRegionProcedure 02496be51c62e7cee197c11d7f18573d, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:37,300 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-21 05:14:37,302 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-21 05:14:37,390 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55024b106350040b0ff76c2bfeb2a1c6, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 05:14:37,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,392 INFO [StoreOpener-55024b106350040b0ff76c2bfeb2a1c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,394 DEBUG [StoreOpener-55024b106350040b0ff76c2bfeb2a1c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/f 2023-07-21 05:14:37,394 DEBUG [StoreOpener-55024b106350040b0ff76c2bfeb2a1c6-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/f 2023-07-21 05:14:37,394 INFO [StoreOpener-55024b106350040b0ff76c2bfeb2a1c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55024b106350040b0ff76c2bfeb2a1c6 columnFamilyName f 2023-07-21 05:14:37,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c63ea6a49ddb24ae12dd463f9ba3188, NAME => 'Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 05:14:37,395 INFO [StoreOpener-55024b106350040b0ff76c2bfeb2a1c6-1] regionserver.HStore(310): Store=55024b106350040b0ff76c2bfeb2a1c6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:37,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,397 INFO [StoreOpener-7c63ea6a49ddb24ae12dd463f9ba3188-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family f of region 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,398 DEBUG [StoreOpener-7c63ea6a49ddb24ae12dd463f9ba3188-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/f 2023-07-21 05:14:37,398 DEBUG [StoreOpener-7c63ea6a49ddb24ae12dd463f9ba3188-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/f 2023-07-21 05:14:37,398 INFO [StoreOpener-7c63ea6a49ddb24ae12dd463f9ba3188-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c63ea6a49ddb24ae12dd463f9ba3188 columnFamilyName f 2023-07-21 05:14:37,399 INFO [StoreOpener-7c63ea6a49ddb24ae12dd463f9ba3188-1] regionserver.HStore(310): Store=7c63ea6a49ddb24ae12dd463f9ba3188/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:37,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:37,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55024b106350040b0ff76c2bfeb2a1c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10057390240, jitterRate=-0.06333254277706146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:37,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55024b106350040b0ff76c2bfeb2a1c6: 2023-07-21 05:14:37,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6., pid=139, masterSystemTime=1689916477386 2023-07-21 05:14:37,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eaf8a3af0c5f09e87d3e6300cee4017c, NAME => 'Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 05:14:37,404 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=55024b106350040b0ff76c2bfeb2a1c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,405 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477404"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916477404"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916477404"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916477404"}]},"ts":"1689916477404"} 2023-07-21 05:14:37,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:37,407 INFO [StoreOpener-eaf8a3af0c5f09e87d3e6300cee4017c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7c63ea6a49ddb24ae12dd463f9ba3188; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11512900960, jitterRate=0.07222245633602142}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:37,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7c63ea6a49ddb24ae12dd463f9ba3188: 2023-07-21 05:14:37,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188., pid=140, masterSystemTime=1689916477391 2023-07-21 05:14:37,409 DEBUG [StoreOpener-eaf8a3af0c5f09e87d3e6300cee4017c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/f 2023-07-21 05:14:37,409 DEBUG [StoreOpener-eaf8a3af0c5f09e87d3e6300cee4017c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/f 2023-07-21 05:14:37,409 INFO [StoreOpener-eaf8a3af0c5f09e87d3e6300cee4017c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eaf8a3af0c5f09e87d3e6300cee4017c columnFamilyName f 2023-07-21 05:14:37,410 INFO [StoreOpener-eaf8a3af0c5f09e87d3e6300cee4017c-1] regionserver.HStore(310): Store=eaf8a3af0c5f09e87d3e6300cee4017c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 
2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 02496be51c62e7cee197c11d7f18573d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 05:14:37,411 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=7c63ea6a49ddb24ae12dd463f9ba3188, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,411 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477411"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916477411"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916477411"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916477411"}]},"ts":"1689916477411"} 2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,413 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=136 2023-07-21 05:14:37,413 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=136, state=SUCCESS; OpenRegionProcedure 55024b106350040b0ff76c2bfeb2a1c6, server=jenkins-hbase4.apache.org,42093,1689916451283 in 172 msec 2023-07-21 05:14:37,413 INFO [StoreOpener-02496be51c62e7cee197c11d7f18573d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, 
region=55024b106350040b0ff76c2bfeb2a1c6, ASSIGN in 340 msec 2023-07-21 05:14:37,415 DEBUG [StoreOpener-02496be51c62e7cee197c11d7f18573d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/f 2023-07-21 05:14:37,416 DEBUG [StoreOpener-02496be51c62e7cee197c11d7f18573d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/f 2023-07-21 05:14:37,416 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-21 05:14:37,416 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; OpenRegionProcedure 7c63ea6a49ddb24ae12dd463f9ba3188, server=jenkins-hbase4.apache.org,42315,1689916451166 in 178 msec 2023-07-21 05:14:37,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,416 INFO [StoreOpener-02496be51c62e7cee197c11d7f18573d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 02496be51c62e7cee197c11d7f18573d columnFamilyName f 2023-07-21 05:14:37,417 INFO [StoreOpener-02496be51c62e7cee197c11d7f18573d-1] regionserver.HStore(310): Store=02496be51c62e7cee197c11d7f18573d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:37,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, ASSIGN in 343 msec 2023-07-21 05:14:37,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:37,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
eaf8a3af0c5f09e87d3e6300cee4017c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11651801120, jitterRate=0.08515854179859161}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:37,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eaf8a3af0c5f09e87d3e6300cee4017c: 2023-07-21 05:14:37,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c., pid=141, masterSystemTime=1689916477386 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a981c509f844b0d1fabc9102609dd6d3, NAME => 'Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 05:14:37,421 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=eaf8a3af0c5f09e87d3e6300cee4017c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,421 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477421"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916477421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916477421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916477421"}]},"ts":"1689916477421"} 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,423 INFO [StoreOpener-a981c509f844b0d1fabc9102609dd6d3-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:37,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 02496be51c62e7cee197c11d7f18573d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11361681280, jitterRate=0.058139026165008545}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:37,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 02496be51c62e7cee197c11d7f18573d: 2023-07-21 05:14:37,424 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=133 2023-07-21 05:14:37,424 DEBUG [StoreOpener-a981c509f844b0d1fabc9102609dd6d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/f 2023-07-21 05:14:37,425 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=133, state=SUCCESS; OpenRegionProcedure eaf8a3af0c5f09e87d3e6300cee4017c, server=jenkins-hbase4.apache.org,42093,1689916451283 in 187 msec 2023-07-21 05:14:37,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d., pid=142, masterSystemTime=1689916477391 2023-07-21 05:14:37,425 DEBUG [StoreOpener-a981c509f844b0d1fabc9102609dd6d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/f 2023-07-21 05:14:37,425 INFO [StoreOpener-a981c509f844b0d1fabc9102609dd6d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a981c509f844b0d1fabc9102609dd6d3 columnFamilyName f 2023-07-21 05:14:37,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, ASSIGN in 351 msec 2023-07-21 05:14:37,426 INFO [StoreOpener-a981c509f844b0d1fabc9102609dd6d3-1] regionserver.HStore(310): Store=a981c509f844b0d1fabc9102609dd6d3/f, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:37,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,426 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=02496be51c62e7cee197c11d7f18573d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,427 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477426"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916477426"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916477426"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916477426"}]},"ts":"1689916477426"} 2023-07-21 05:14:37,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=135 2023-07-21 05:14:37,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=135, state=SUCCESS; OpenRegionProcedure 02496be51c62e7cee197c11d7f18573d, server=jenkins-hbase4.apache.org,42315,1689916451166 in 192 msec 2023-07-21 05:14:37,433 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, ASSIGN in 359 msec 2023-07-21 05:14:37,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:37,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a981c509f844b0d1fabc9102609dd6d3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9454063360, jitterRate=-0.11952173709869385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:37,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for a981c509f844b0d1fabc9102609dd6d3: 2023-07-21 05:14:37,437 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3., pid=138, masterSystemTime=1689916477386 2023-07-21 05:14:37,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,438 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=a981c509f844b0d1fabc9102609dd6d3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,438 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916477438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916477438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916477438"}]},"ts":"1689916477438"} 2023-07-21 05:14:37,441 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-21 05:14:37,441 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure a981c509f844b0d1fabc9102609dd6d3, server=jenkins-hbase4.apache.org,42093,1689916451283 in 206 msec 2023-07-21 05:14:37,443 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-21 05:14:37,443 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, ASSIGN in 368 msec 2023-07-21 05:14:37,444 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:37,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916477444"}]},"ts":"1689916477444"} 2023-07-21 05:14:37,445 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 05:14:37,447 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:37,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 846 msec 2023-07-21 05:14:37,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-21 05:14:37,710 
INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-21 05:14:37,710 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-21 05:14:37,710 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:37,714 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-21 05:14:37,714 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:37,714 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 05:14:37,714 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:37,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 05:14:37,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:37,723 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 05:14:37,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 05:14:37,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:37,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 05:14:37,727 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916477727"}]},"ts":"1689916477727"} 2023-07-21 05:14:37,728 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 05:14:37,730 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 05:14:37,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, UNASSIGN}] 2023-07-21 05:14:37,734 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, UNASSIGN 2023-07-21 05:14:37,734 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, UNASSIGN 2023-07-21 05:14:37,734 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, UNASSIGN 2023-07-21 05:14:37,735 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, UNASSIGN 2023-07-21 05:14:37,735 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, UNASSIGN 2023-07-21 05:14:37,735 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=eaf8a3af0c5f09e87d3e6300cee4017c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,735 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477735"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477735"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477735"}]},"ts":"1689916477735"} 2023-07-21 05:14:37,736 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=55024b106350040b0ff76c2bfeb2a1c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,736 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=7c63ea6a49ddb24ae12dd463f9ba3188, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,736 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477736"}]},"ts":"1689916477736"} 2023-07-21 05:14:37,736 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477736"}]},"ts":"1689916477736"} 2023-07-21 05:14:37,736 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=a981c509f844b0d1fabc9102609dd6d3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:37,736 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477736"}]},"ts":"1689916477736"} 2023-07-21 05:14:37,736 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=02496be51c62e7cee197c11d7f18573d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:37,737 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477736"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916477736"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916477736"}]},"ts":"1689916477736"} 2023-07-21 05:14:37,739 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=144, state=RUNNABLE; CloseRegionProcedure eaf8a3af0c5f09e87d3e6300cee4017c, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=147, state=RUNNABLE; CloseRegionProcedure 55024b106350040b0ff76c2bfeb2a1c6, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure 7c63ea6a49ddb24ae12dd463f9ba3188, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:37,739 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=145, state=RUNNABLE; CloseRegionProcedure a981c509f844b0d1fabc9102609dd6d3, server=jenkins-hbase4.apache.org,42093,1689916451283}] 2023-07-21 05:14:37,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=146, state=RUNNABLE; CloseRegionProcedure 02496be51c62e7cee197c11d7f18573d, server=jenkins-hbase4.apache.org,42315,1689916451166}] 2023-07-21 05:14:37,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 05:14:37,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
55024b106350040b0ff76c2bfeb2a1c6, disabling compactions & flushes 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 02496be51c62e7cee197c11d7f18573d, disabling compactions & flushes 2023-07-21 05:14:37,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. after waiting 0 ms 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. after waiting 0 ms 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 2023-07-21 05:14:37,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:37,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:37,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d. 2023-07-21 05:14:37,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 02496be51c62e7cee197c11d7f18573d: 2023-07-21 05:14:37,900 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6. 
2023-07-21 05:14:37,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55024b106350040b0ff76c2bfeb2a1c6: 2023-07-21 05:14:37,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:37,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7c63ea6a49ddb24ae12dd463f9ba3188, disabling compactions & flushes 2023-07-21 05:14:37,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. after waiting 0 ms 2023-07-21 05:14:37,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,903 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=02496be51c62e7cee197c11d7f18573d, regionState=CLOSED 2023-07-21 05:14:37,903 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477903"}]},"ts":"1689916477903"} 2023-07-21 05:14:37,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:37,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=146 2023-07-21 05:14:37,904 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=55024b106350040b0ff76c2bfeb2a1c6, regionState=CLOSED 2023-07-21 05:14:37,909 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477904"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477904"}]},"ts":"1689916477904"} 2023-07-21 05:14:37,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=146, state=SUCCESS; CloseRegionProcedure 02496be51c62e7cee197c11d7f18573d, server=jenkins-hbase4.apache.org,42315,1689916451166 in 165 msec 2023-07-21 05:14:37,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a981c509f844b0d1fabc9102609dd6d3, disabling compactions & flushes 2023-07-21 05:14:37,909 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. after waiting 0 ms 2023-07-21 05:14:37,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 2023-07-21 05:14:37,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:37,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02496be51c62e7cee197c11d7f18573d, UNASSIGN in 177 msec 2023-07-21 05:14:37,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188. 2023-07-21 05:14:37,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7c63ea6a49ddb24ae12dd463f9ba3188: 2023-07-21 05:14:37,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:37,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=147 2023-07-21 05:14:37,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=147, state=SUCCESS; CloseRegionProcedure 55024b106350040b0ff76c2bfeb2a1c6, server=jenkins-hbase4.apache.org,42093,1689916451283 in 171 msec 2023-07-21 05:14:37,919 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=7c63ea6a49ddb24ae12dd463f9ba3188, regionState=CLOSED 2023-07-21 05:14:37,919 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477919"}]},"ts":"1689916477919"} 2023-07-21 05:14:37,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:37,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3. 
2023-07-21 05:14:37,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a981c509f844b0d1fabc9102609dd6d3: 2023-07-21 05:14:37,928 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=a981c509f844b0d1fabc9102609dd6d3, regionState=CLOSED 2023-07-21 05:14:37,928 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=55024b106350040b0ff76c2bfeb2a1c6, UNASSIGN in 189 msec 2023-07-21 05:14:37,928 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689916477928"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477928"}]},"ts":"1689916477928"} 2023-07-21 05:14:37,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:37,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eaf8a3af0c5f09e87d3e6300cee4017c, disabling compactions & flushes 2023-07-21 05:14:37,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. after waiting 0 ms 2023-07-21 05:14:37,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 
2023-07-21 05:14:37,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-21 05:14:37,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure 7c63ea6a49ddb24ae12dd463f9ba3188, server=jenkins-hbase4.apache.org,42315,1689916451166 in 193 msec 2023-07-21 05:14:37,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=145 2023-07-21 05:14:37,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=145, state=SUCCESS; CloseRegionProcedure a981c509f844b0d1fabc9102609dd6d3, server=jenkins-hbase4.apache.org,42093,1689916451283 in 195 msec 2023-07-21 05:14:37,941 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a981c509f844b0d1fabc9102609dd6d3, UNASSIGN in 208 msec 2023-07-21 05:14:37,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:37,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c63ea6a49ddb24ae12dd463f9ba3188, UNASSIGN in 210 msec 2023-07-21 05:14:37,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c. 2023-07-21 05:14:37,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eaf8a3af0c5f09e87d3e6300cee4017c: 2023-07-21 05:14:37,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:37,945 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=eaf8a3af0c5f09e87d3e6300cee4017c, regionState=CLOSED 2023-07-21 05:14:37,946 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689916477945"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916477945"}]},"ts":"1689916477945"} 2023-07-21 05:14:37,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=144 2023-07-21 05:14:37,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=144, state=SUCCESS; CloseRegionProcedure eaf8a3af0c5f09e87d3e6300cee4017c, server=jenkins-hbase4.apache.org,42093,1689916451283 in 208 msec 2023-07-21 05:14:37,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-21 05:14:37,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eaf8a3af0c5f09e87d3e6300cee4017c, UNASSIGN in 221 msec 2023-07-21 05:14:37,956 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916477956"}]},"ts":"1689916477956"} 2023-07-21 05:14:37,958 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 05:14:37,960 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 05:14:37,964 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 238 msec 2023-07-21 05:14:38,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-21 05:14:38,029 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-21 05:14:38,030 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:38,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 05:14:38,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_867440834, current retry=0 2023-07-21 05:14:38,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_867440834. 
2023-07-21 05:14:38,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:38,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 05:14:38,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:38,044 INFO [Listener at localhost/34619] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 05:14:38,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 05:14:38,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:38,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:40408 deadline: 1689916538044, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 05:14:38,046 DEBUG [Listener at localhost/34619] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
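The TableNotEnabledException above comes from the DisableTableProcedure preflight check: the table is already in state DISABLED, so the second disable request is rejected at submission time and the test utility falls back to deleting the table directly. A small sketch of that guard using the public Admin API follows; the variable name admin is an assumption for an org.apache.hadoop.hbase.client.Admin instance.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTableQuietly {
      // Disables the table only if it is still enabled, then deletes it;
      // deleteTable requires the table to be in the disabled state first.
      static void drop(Admin admin, String table) throws IOException {
        TableName tn = TableName.valueOf(table);
        if (!admin.isTableDisabled(tn)) {
          admin.disableTable(tn);
        }
        admin.deleteTable(tn);
      }
    }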
2023-07-21 05:14:38,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-21 05:14:38,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,049 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_867440834' 2023-07-21 05:14:38,050 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:38,059 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:38,059 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:38,059 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:38,059 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:38,059 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:38,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 05:14:38,064 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/f, FileablePath, 
hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/recovered.edits] 2023-07-21 05:14:38,064 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/recovered.edits] 2023-07-21 05:14:38,064 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/recovered.edits] 2023-07-21 05:14:38,064 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/recovered.edits] 2023-07-21 05:14:38,065 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/f, FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/recovered.edits] 2023-07-21 05:14:38,075 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c/recovered.edits/4.seqid 2023-07-21 05:14:38,075 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3/recovered.edits/4.seqid 2023-07-21 05:14:38,076 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6/recovered.edits/4.seqid 2023-07-21 05:14:38,076 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188/recovered.edits/4.seqid 2023-07-21 05:14:38,076 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/a981c509f844b0d1fabc9102609dd6d3 2023-07-21 05:14:38,076 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/eaf8a3af0c5f09e87d3e6300cee4017c 2023-07-21 05:14:38,077 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/55024b106350040b0ff76c2bfeb2a1c6 2023-07-21 05:14:38,077 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/7c63ea6a49ddb24ae12dd463f9ba3188 2023-07-21 05:14:38,077 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/recovered.edits/4.seqid to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/archive/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d/recovered.edits/4.seqid 2023-07-21 05:14:38,078 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/.tmp/data/default/Group_testDisabledTableMove/02496be51c62e7cee197c11d7f18573d 2023-07-21 05:14:38,078 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 05:14:38,080 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,082 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 05:14:38,087 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 05:14:38,092 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,093 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-21 05:14:38,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916478093"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916478093"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916478093"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916478093"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,093 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916478093"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,105 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 05:14:38,105 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => eaf8a3af0c5f09e87d3e6300cee4017c, NAME => 'Group_testDisabledTableMove,,1689916476601.eaf8a3af0c5f09e87d3e6300cee4017c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => a981c509f844b0d1fabc9102609dd6d3, NAME => 'Group_testDisabledTableMove,aaaaa,1689916476601.a981c509f844b0d1fabc9102609dd6d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 02496be51c62e7cee197c11d7f18573d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689916476601.02496be51c62e7cee197c11d7f18573d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 55024b106350040b0ff76c2bfeb2a1c6, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689916476601.55024b106350040b0ff76c2bfeb2a1c6.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 7c63ea6a49ddb24ae12dd463f9ba3188, NAME => 'Group_testDisabledTableMove,zzzzz,1689916476601.7c63ea6a49ddb24ae12dd463f9ba3188.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 05:14:38,105 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
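The five Delete mutations above remove one info row per region from hbase:meta, and the listing that follows them shows the encoded names and key ranges of the dropped regions; the table's own state row is removed in the next entry. For reference, the same region boundaries can be read through the public Admin API while a table still exists; this is a sketch only, again assuming an Admin instance named admin.

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    final class DumpRegions {
      // Prints the encoded name and [startKey, endKey) range of every region,
      // the same information shown in the "Deleted regions" log entry above.
      static void dump(Admin admin, String table) throws IOException {
        List<RegionInfo> regions = admin.getRegions(TableName.valueOf(table));
        for (RegionInfo ri : regions) {
          System.out.println(ri.getEncodedName() + " ["
              + Bytes.toStringBinary(ri.getStartKey()) + ", "
              + Bytes.toStringBinary(ri.getEndKey()) + ")");
        }
      }
    }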
2023-07-21 05:14:38,106 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916478106"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:38,108 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 05:14:38,111 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 05:14:38,113 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 65 msec 2023-07-21 05:14:38,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-21 05:14:38,165 INFO [Listener at localhost/34619] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-21 05:14:38,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:38,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:38,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:38,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:33541] to rsgroup default 2023-07-21 05:14:38,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:38,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_867440834, current retry=0 2023-07-21 05:14:38,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33541,1689916455330, jenkins-hbase4.apache.org,40677,1689916451367] are moved back to Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_867440834 => default 2023-07-21 05:14:38,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:38,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_867440834 2023-07-21 05:14:38,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:38,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:38,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:38,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
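With the table gone, the teardown walks the rsgroup state back: the two servers are moved back to the default group and the now-empty test group is removed, which is why the ZK GroupInfo count drops to 5 above. A sketch of that cleanup through the same RSGroupAdminClient is below; rsGroupAdmin is the client from the earlier sketch, and the signatures are the branch-2.4 ones, so treat them as assumptions.

    import java.io.IOException;
    import java.util.Set;
    import java.util.TreeSet;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class TearDownGroup {
      // Returns the group's servers to the default group, then drops the group.
      static void tearDown(RSGroupAdminClient rsGroupAdmin, String group) throws IOException {
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        if (info != null && !info.getServers().isEmpty()) {
          Set<Address> servers = new TreeSet<>(info.getServers());
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        rsGroupAdmin.removeRSGroup(group);
      }
    }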
2023-07-21 05:14:38,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:38,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:38,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:38,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:38,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:38,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:38,198 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:38,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:38,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:38,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:38,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:38,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917678213, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:38,214 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:38,216 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:38,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,217 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:38,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:38,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:38,248 INFO [Listener at localhost/34619] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=516 (was 515) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1860509775_17 at /127.0.0.1:46308 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-284496683_17 at /127.0.0.1:44608 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xfdeaa0f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x78ef668c-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 775) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=174 (was 174), AvailableMemoryMB=3648 (was 3726) 2023-07-21 05:14:38,248 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-21 05:14:38,285 INFO [Listener at localhost/34619] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=516, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=174, AvailableMemoryMB=3646 2023-07-21 05:14:38,286 WARN [Listener at localhost/34619] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-21 05:14:38,286 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 05:14:38,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:38,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:38,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:38,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:38,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:38,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:38,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:38,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:38,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:38,299 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:38,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:38,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
05:14:38,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:38,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:38,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:38,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42467] to rsgroup master 2023-07-21 05:14:38,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:38,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:40408 deadline: 1689917678316, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 2023-07-21 05:14:38,317 WARN [Listener at localhost/34619] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42467 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:38,318 INFO [Listener at localhost/34619] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:38,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:38,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:38,319 INFO [Listener at localhost/34619] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33541, jenkins-hbase4.apache.org:40677, jenkins-hbase4.apache.org:42093, jenkins-hbase4.apache.org:42315], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:38,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:38,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42467] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:38,320 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 05:14:38,321 INFO [Listener at localhost/34619] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 05:14:38,321 DEBUG [Listener at localhost/34619] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x083f3c49 to 127.0.0.1:55013 2023-07-21 05:14:38,321 DEBUG [Listener at localhost/34619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,322 DEBUG [Listener at localhost/34619] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 05:14:38,322 DEBUG [Listener at localhost/34619] util.JVMClusterUtil(257): Found active master hash=965985203, stopped=false 2023-07-21 05:14:38,322 DEBUG [Listener at localhost/34619] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:38,323 DEBUG [Listener at localhost/34619] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:38,323 INFO [Listener at localhost/34619] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:38,325 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:38,325 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:38,325 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:38,325 DEBUG 
[Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:38,325 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:38,325 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:38,325 INFO [Listener at localhost/34619] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 05:14:38,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:38,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:38,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:38,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:38,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:38,326 DEBUG [Listener at localhost/34619] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x546eadd7 to 127.0.0.1:55013 2023-07-21 05:14:38,326 DEBUG [Listener at localhost/34619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,326 INFO [Listener at localhost/34619] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42315,1689916451166' ***** 2023-07-21 05:14:38,326 INFO [Listener at localhost/34619] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:38,326 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:38,327 INFO [Listener at localhost/34619] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42093,1689916451283' ***** 2023-07-21 05:14:38,329 INFO [Listener at localhost/34619] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:38,333 INFO [Listener at localhost/34619] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40677,1689916451367' ***** 2023-07-21 05:14:38,333 INFO [Listener at localhost/34619] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:38,333 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:38,333 INFO [Listener at localhost/34619] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33541,1689916455330' ***** 2023-07-21 05:14:38,333 INFO 
[RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:38,334 INFO [Listener at localhost/34619] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:38,334 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:38,349 INFO [RS:2;jenkins-hbase4:40677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36a7cf96{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:38,349 INFO [RS:0;jenkins-hbase4:42315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1364e664{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:38,349 INFO [RS:3;jenkins-hbase4:33541] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@69f161b2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:38,349 INFO [RS:1;jenkins-hbase4:42093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2d178a1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:38,354 INFO [RS:0;jenkins-hbase4:42315] server.AbstractConnector(383): Stopped ServerConnector@6fc105c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,354 INFO [RS:3;jenkins-hbase4:33541] server.AbstractConnector(383): Stopped ServerConnector@2373ab06{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,354 INFO [RS:2;jenkins-hbase4:40677] server.AbstractConnector(383): Stopped ServerConnector@1266d143{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,354 INFO [RS:3;jenkins-hbase4:33541] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:38,354 INFO [RS:1;jenkins-hbase4:42093] server.AbstractConnector(383): Stopped ServerConnector@4ccea9bd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,354 INFO [RS:2;jenkins-hbase4:40677] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:38,354 INFO [RS:0;jenkins-hbase4:42315] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:38,356 INFO [RS:3;jenkins-hbase4:33541] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39ac2a37{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:38,357 INFO [RS:2;jenkins-hbase4:40677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e2294fa{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:38,355 INFO [RS:1;jenkins-hbase4:42093] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:38,357 INFO [RS:0;jenkins-hbase4:42315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@623f7cf4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:38,359 INFO [RS:3;jenkins-hbase4:33541] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2a8106ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:38,359 INFO [RS:1;jenkins-hbase4:42093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39d35c00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:38,360 INFO [RS:0;jenkins-hbase4:42315] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4784b602{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:38,363 INFO [RS:0;jenkins-hbase4:42315] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:38,363 INFO [RS:3;jenkins-hbase4:33541] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:38,363 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:38,363 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:38,363 INFO [RS:3;jenkins-hbase4:33541] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:38,363 INFO [RS:3;jenkins-hbase4:33541] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:38,363 INFO [RS:0;jenkins-hbase4:42315] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:38,359 INFO [RS:2;jenkins-hbase4:40677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5a76cee2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:38,363 INFO [RS:0;jenkins-hbase4:42315] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:38,363 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:38,364 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(3305): Received CLOSE for e9f604e2452442c1f9af258e734bdc77 2023-07-21 05:14:38,364 DEBUG [RS:3;jenkins-hbase4:33541] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28e182c6 to 127.0.0.1:55013 2023-07-21 05:14:38,364 DEBUG [RS:3;jenkins-hbase4:33541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,364 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33541,1689916455330; all regions closed. 2023-07-21 05:14:38,364 INFO [RS:2;jenkins-hbase4:40677] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:38,365 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:38,365 INFO [RS:2;jenkins-hbase4:40677] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 05:14:38,368 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:38,367 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,367 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,367 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,367 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,366 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e9f604e2452442c1f9af258e734bdc77, disabling compactions & flushes 2023-07-21 05:14:38,366 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(3305): Received CLOSE for ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:38,366 INFO [RS:1;jenkins-hbase4:42093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ab64a2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:38,371 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(3305): Received CLOSE for c2dfaca75b68ed6d2ff1887a0a0f2c22 2023-07-21 05:14:38,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:38,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:38,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. after waiting 0 ms 2023-07-21 05:14:38,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:38,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e9f604e2452442c1f9af258e734bdc77 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 05:14:38,370 INFO [RS:2;jenkins-hbase4:40677] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:38,372 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:38,372 INFO [RS:1;jenkins-hbase4:42093] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:38,372 DEBUG [RS:2;jenkins-hbase4:40677] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x617ec7c3 to 127.0.0.1:55013 2023-07-21 05:14:38,372 DEBUG [RS:2;jenkins-hbase4:40677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,372 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40677,1689916451367; all regions closed. 
2023-07-21 05:14:38,371 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:38,372 INFO [RS:1;jenkins-hbase4:42093] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:38,373 DEBUG [RS:0;jenkins-hbase4:42315] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x097986d4 to 127.0.0.1:55013 2023-07-21 05:14:38,373 INFO [RS:1;jenkins-hbase4:42093] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:38,373 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(3305): Received CLOSE for 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:38,373 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:38,373 DEBUG [RS:1;jenkins-hbase4:42093] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71eee8f5 to 127.0.0.1:55013 2023-07-21 05:14:38,373 DEBUG [RS:1;jenkins-hbase4:42093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,373 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 05:14:38,373 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1478): Online Regions={55eed975c710f1801bb4aedb9ff16d4c=testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c.} 2023-07-21 05:14:38,374 DEBUG [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1504): Waiting on 55eed975c710f1801bb4aedb9ff16d4c 2023-07-21 05:14:38,373 DEBUG [RS:0;jenkins-hbase4:42315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,374 INFO [RS:0;jenkins-hbase4:42315] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:38,374 INFO [RS:0;jenkins-hbase4:42315] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:38,374 INFO [RS:0;jenkins-hbase4:42315] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 05:14:38,374 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 05:14:38,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55eed975c710f1801bb4aedb9ff16d4c, disabling compactions & flushes 2023-07-21 05:14:38,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:38,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:38,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. after waiting 0 ms 2023-07-21 05:14:38,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
2023-07-21 05:14:38,376 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 05:14:38,377 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1478): Online Regions={e9f604e2452442c1f9af258e734bdc77=hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77., ede3ac9f206f1997341b19733c39fd22=hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22., 1588230740=hbase:meta,,1.1588230740, c2dfaca75b68ed6d2ff1887a0a0f2c22=unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22.} 2023-07-21 05:14:38,377 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1504): Waiting on 1588230740, c2dfaca75b68ed6d2ff1887a0a0f2c22, e9f604e2452442c1f9af258e734bdc77, ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:38,377 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:38,377 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:38,377 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:38,377 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:38,377 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:38,377 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB heapSize=61.09 KB 2023-07-21 05:14:38,402 DEBUG [RS:3;jenkins-hbase4:33541] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,402 INFO [RS:3;jenkins-hbase4:33541] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33541%2C1689916455330:(num 1689916455657) 2023-07-21 05:14:38,402 DEBUG [RS:3;jenkins-hbase4:33541] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,402 INFO [RS:3;jenkins-hbase4:33541] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/testRename/55eed975c710f1801bb4aedb9ff16d4c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 05:14:38,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 2023-07-21 05:14:38,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55eed975c710f1801bb4aedb9ff16d4c: 2023-07-21 05:14:38,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689916471026.55eed975c710f1801bb4aedb9ff16d4c. 
2023-07-21 05:14:38,535 INFO [RS:3;jenkins-hbase4:33541] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:38,549 INFO [RS:3;jenkins-hbase4:33541] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:38,549 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:38,549 INFO [RS:3;jenkins-hbase4:33541] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:38,549 INFO [RS:3;jenkins-hbase4:33541] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 05:14:38,555 INFO [RS:3;jenkins-hbase4:33541] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33541 2023-07-21 05:14:38,561 DEBUG [RS:2;jenkins-hbase4:40677] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,561 INFO [RS:2;jenkins-hbase4:40677] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40677%2C1689916451367:(num 1689916453453) 2023-07-21 05:14:38,561 DEBUG [RS:2;jenkins-hbase4:40677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,561 INFO [RS:2;jenkins-hbase4:40677] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,574 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42093,1689916451283; all regions closed. 2023-07-21 05:14:38,577 DEBUG [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1504): Waiting on 1588230740, c2dfaca75b68ed6d2ff1887a0a0f2c22, e9f604e2452442c1f9af258e734bdc77, ede3ac9f206f1997341b19733c39fd22 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33541,1689916455330 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,595 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,596 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33541,1689916455330] 2023-07-21 05:14:38,596 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33541,1689916455330; numProcessing=1 2023-07-21 05:14:38,597 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33541,1689916455330 already deleted, retry=false 2023-07-21 05:14:38,597 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33541,1689916455330 expired; onlineServers=3 2023-07-21 05:14:38,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/.tmp/info/44fc6b380f4f4f14a7557d8e81fe5fd9 2023-07-21 05:14:38,601 INFO [RS:2;jenkins-hbase4:40677] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:38,611 INFO [RS:2;jenkins-hbase4:40677] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:38,611 INFO [RS:2;jenkins-hbase4:40677] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:38,612 INFO [RS:2;jenkins-hbase4:40677] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 05:14:38,611 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 05:14:38,613 INFO [RS:2;jenkins-hbase4:40677] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40677 2023-07-21 05:14:38,619 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:38,619 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:38,619 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40677,1689916451367 2023-07-21 05:14:38,619 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,620 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40677,1689916451367] 2023-07-21 05:14:38,620 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40677,1689916451367; numProcessing=2 2023-07-21 05:14:38,622 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40677,1689916451367 already deleted, retry=false 2023-07-21 05:14:38,622 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40677,1689916451367 expired; onlineServers=2 2023-07-21 05:14:38,623 DEBUG [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,623 INFO [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42093%2C1689916451283.meta:.meta(num 1689916453600) 2023-07-21 05:14:38,636 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/info/60aa575e30064096a76d1f2ac40bf1bd 2023-07-21 05:14:38,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/.tmp/info/44fc6b380f4f4f14a7557d8e81fe5fd9 as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/info/44fc6b380f4f4f14a7557d8e81fe5fd9 2023-07-21 05:14:38,640 DEBUG [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,640 INFO [RS:1;jenkins-hbase4:42093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42093%2C1689916451283:(num 1689916453453) 2023-07-21 
05:14:38,640 DEBUG [RS:1;jenkins-hbase4:42093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,640 INFO [RS:1;jenkins-hbase4:42093] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,641 INFO [RS:1;jenkins-hbase4:42093] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:38,641 INFO [RS:1;jenkins-hbase4:42093] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:38,641 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:38,641 INFO [RS:1;jenkins-hbase4:42093] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:38,641 INFO [RS:1;jenkins-hbase4:42093] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 05:14:38,642 INFO [RS:1;jenkins-hbase4:42093] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42093 2023-07-21 05:14:38,647 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60aa575e30064096a76d1f2ac40bf1bd 2023-07-21 05:14:38,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/info/44fc6b380f4f4f14a7557d8e81fe5fd9, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 05:14:38,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for e9f604e2452442c1f9af258e734bdc77 in 277ms, sequenceid=6, compaction requested=false 2023-07-21 05:14:38,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/namespace/e9f604e2452442c1f9af258e734bdc77/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 05:14:38,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:38,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e9f604e2452442c1f9af258e734bdc77: 2023-07-21 05:14:38,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689916453857.e9f604e2452442c1f9af258e734bdc77. 2023-07-21 05:14:38,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ede3ac9f206f1997341b19733c39fd22, disabling compactions & flushes 2023-07-21 05:14:38,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:38,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 
2023-07-21 05:14:38,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. after waiting 0 ms 2023-07-21 05:14:38,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:38,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ede3ac9f206f1997341b19733c39fd22 1/1 column families, dataSize=22.07 KB heapSize=36.54 KB 2023-07-21 05:14:38,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/e7ad090c96454f3980216d27cedbeedd 2023-07-21 05:14:38,687 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/rep_barrier/ce2c9686683a45afb2ba9802f0284c2d 2023-07-21 05:14:38,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e7ad090c96454f3980216d27cedbeedd 2023-07-21 05:14:38,693 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce2c9686683a45afb2ba9802f0284c2d 2023-07-21 05:14:38,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/.tmp/m/e7ad090c96454f3980216d27cedbeedd as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/e7ad090c96454f3980216d27cedbeedd 2023-07-21 05:14:38,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e7ad090c96454f3980216d27cedbeedd 2023-07-21 05:14:38,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/m/e7ad090c96454f3980216d27cedbeedd, entries=22, sequenceid=107, filesize=5.9 K 2023-07-21 05:14:38,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22601, heapSize ~36.52 KB/37400, currentSize=0 B/0 for ede3ac9f206f1997341b19733c39fd22 in 36ms, sequenceid=107, compaction requested=true 2023-07-21 05:14:38,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 05:14:38,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), 
to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/table/a3611963c9f544408dda1446bd6828e7 2023-07-21 05:14:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/rsgroup/ede3ac9f206f1997341b19733c39fd22/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-21 05:14:38,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:38,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ede3ac9f206f1997341b19733c39fd22: 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689916453969.ede3ac9f206f1997341b19733c39fd22. 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2dfaca75b68ed6d2ff1887a0a0f2c22, disabling compactions & flushes 2023-07-21 05:14:38,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. after waiting 0 ms 2023-07-21 05:14:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:38,726 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3611963c9f544408dda1446bd6828e7 2023-07-21 05:14:38,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/info/60aa575e30064096a76d1f2ac40bf1bd as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info/60aa575e30064096a76d1f2ac40bf1bd 2023-07-21 05:14:38,729 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,729 INFO [RS:2;jenkins-hbase4:40677] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40677,1689916451367; zookeeper connection closed. 
2023-07-21 05:14:38,729 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:40677-0x101864d20580003, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,731 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:38,731 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,731 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42093,1689916451283 2023-07-21 05:14:38,732 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42093,1689916451283] 2023-07-21 05:14:38,732 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42093,1689916451283; numProcessing=3 2023-07-21 05:14:38,734 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42093,1689916451283 already deleted, retry=false 2023-07-21 05:14:38,734 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42093,1689916451283 expired; onlineServers=1 2023-07-21 05:14:38,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60aa575e30064096a76d1f2ac40bf1bd 2023-07-21 05:14:38,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/info/60aa575e30064096a76d1f2ac40bf1bd, entries=62, sequenceid=210, filesize=11.8 K 2023-07-21 05:14:38,735 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4c102b7a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4c102b7a 2023-07-21 05:14:38,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/default/unmovedTable/c2dfaca75b68ed6d2ff1887a0a0f2c22/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 05:14:38,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 
2023-07-21 05:14:38,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/rep_barrier/ce2c9686683a45afb2ba9802f0284c2d as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier/ce2c9686683a45afb2ba9802f0284c2d 2023-07-21 05:14:38,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2dfaca75b68ed6d2ff1887a0a0f2c22: 2023-07-21 05:14:38,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689916472686.c2dfaca75b68ed6d2ff1887a0a0f2c22. 2023-07-21 05:14:38,745 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce2c9686683a45afb2ba9802f0284c2d 2023-07-21 05:14:38,745 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/rep_barrier/ce2c9686683a45afb2ba9802f0284c2d, entries=8, sequenceid=210, filesize=5.8 K 2023-07-21 05:14:38,746 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/.tmp/table/a3611963c9f544408dda1446bd6828e7 as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table/a3611963c9f544408dda1446bd6828e7 2023-07-21 05:14:38,753 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a3611963c9f544408dda1446bd6828e7 2023-07-21 05:14:38,753 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/table/a3611963c9f544408dda1446bd6828e7, entries=16, sequenceid=210, filesize=6.0 K 2023-07-21 05:14:38,754 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 377ms, sequenceid=210, compaction requested=false 2023-07-21 05:14:38,754 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 05:14:38,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-21 05:14:38,770 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:38,771 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:38,771 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:38,771 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): 
Closed hbase:meta,,1.1588230740 2023-07-21 05:14:38,777 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42315,1689916451166; all regions closed. 2023-07-21 05:14:38,783 DEBUG [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,784 INFO [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42315%2C1689916451166.meta:.meta(num 1689916462096) 2023-07-21 05:14:38,794 DEBUG [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/oldWALs 2023-07-21 05:14:38,795 INFO [RS:0;jenkins-hbase4:42315] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42315%2C1689916451166:(num 1689916453451) 2023-07-21 05:14:38,795 DEBUG [RS:0;jenkins-hbase4:42315] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,795 INFO [RS:0;jenkins-hbase4:42315] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:38,795 INFO [RS:0;jenkins-hbase4:42315] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:38,795 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:38,796 INFO [RS:0;jenkins-hbase4:42315] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42315 2023-07-21 05:14:38,799 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:38,799 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42315,1689916451166 2023-07-21 05:14:38,800 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42315,1689916451166] 2023-07-21 05:14:38,800 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42315,1689916451166; numProcessing=4 2023-07-21 05:14:38,900 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,900 INFO [RS:0;jenkins-hbase4:42315] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42315,1689916451166; zookeeper connection closed. 
2023-07-21 05:14:38,900 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42315-0x101864d20580001, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,901 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@760d5be3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@760d5be3 2023-07-21 05:14:38,901 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42315,1689916451166 already deleted, retry=false 2023-07-21 05:14:38,902 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42315,1689916451166 expired; onlineServers=0 2023-07-21 05:14:38,902 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42467,1689916449058' ***** 2023-07-21 05:14:38,902 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 05:14:38,903 DEBUG [M:0;jenkins-hbase4:42467] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78aec988, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:38,903 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:38,906 INFO [M:0;jenkins-hbase4:42467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-21 05:14:38,906 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:38,906 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:38,906 INFO [M:0;jenkins-hbase4:42467] server.AbstractConnector(383): Stopped ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,906 INFO [M:0;jenkins-hbase4:42467] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:38,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:38,907 INFO [M:0;jenkins-hbase4:42467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:38,908 INFO [M:0;jenkins-hbase4:42467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:38,911 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegionServer(1144): 
stopping server jenkins-hbase4.apache.org,42467,1689916449058 2023-07-21 05:14:38,911 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42467,1689916449058; all regions closed. 2023-07-21 05:14:38,911 DEBUG [M:0;jenkins-hbase4:42467] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:38,911 INFO [M:0;jenkins-hbase4:42467] master.HMaster(1491): Stopping master jetty server 2023-07-21 05:14:38,911 INFO [M:0;jenkins-hbase4:42467] server.AbstractConnector(383): Stopped ServerConnector@69fc8b39{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:38,912 DEBUG [M:0;jenkins-hbase4:42467] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 05:14:38,912 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 05:14:38,912 DEBUG [M:0;jenkins-hbase4:42467] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 05:14:38,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916452914] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916452914,5,FailOnTimeoutGroup] 2023-07-21 05:14:38,912 INFO [M:0;jenkins-hbase4:42467] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 05:14:38,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916452911] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916452911,5,FailOnTimeoutGroup] 2023-07-21 05:14:38,912 INFO [M:0;jenkins-hbase4:42467] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 05:14:38,912 INFO [M:0;jenkins-hbase4:42467] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-21 05:14:38,912 DEBUG [M:0;jenkins-hbase4:42467] master.HMaster(1512): Stopping service threads 2023-07-21 05:14:38,913 INFO [M:0;jenkins-hbase4:42467] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 05:14:38,913 ERROR [M:0;jenkins-hbase4:42467] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 05:14:38,913 INFO [M:0;jenkins-hbase4:42467] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 05:14:38,913 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 05:14:38,914 DEBUG [M:0;jenkins-hbase4:42467] zookeeper.ZKUtil(398): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 05:14:38,914 WARN [M:0;jenkins-hbase4:42467] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 05:14:38,914 INFO [M:0;jenkins-hbase4:42467] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 05:14:38,914 INFO [M:0;jenkins-hbase4:42467] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 05:14:38,914 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 05:14:38,914 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:38,914 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:38,914 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 05:14:38,914 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:38,914 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.12 KB heapSize=621.24 KB 2023-07-21 05:14:38,926 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,926 INFO [RS:1;jenkins-hbase4:42093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42093,1689916451283; zookeeper connection closed. 
2023-07-21 05:14:38,926 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:42093-0x101864d20580002, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:38,926 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e282510] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e282510 2023-07-21 05:14:38,930 INFO [M:0;jenkins-hbase4:42467] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.12 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d335b02536294366a11693343ea3794e 2023-07-21 05:14:38,938 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d335b02536294366a11693343ea3794e as hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d335b02536294366a11693343ea3794e 2023-07-21 05:14:38,944 INFO [M:0;jenkins-hbase4:42467] regionserver.HStore(1080): Added hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d335b02536294366a11693343ea3794e, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-21 05:14:38,945 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegion(2948): Finished flush of dataSize ~519.12 KB/531577, heapSize ~621.23 KB/636136, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1152, compaction requested=false 2023-07-21 05:14:38,946 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:38,947 DEBUG [M:0;jenkins-hbase4:42467] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:38,951 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:38,951 INFO [M:0;jenkins-hbase4:42467] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 05:14:38,951 INFO [M:0;jenkins-hbase4:42467] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42467 2023-07-21 05:14:38,953 DEBUG [M:0;jenkins-hbase4:42467] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42467,1689916449058 already deleted, retry=false 2023-07-21 05:14:39,026 INFO [RS:3;jenkins-hbase4:33541] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33541,1689916455330; zookeeper connection closed. 
2023-07-21 05:14:39,026 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:39,026 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): regionserver:33541-0x101864d2058000b, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:39,032 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@774459e9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@774459e9 2023-07-21 05:14:39,032 INFO [Listener at localhost/34619] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 05:14:39,126 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:39,126 INFO [M:0;jenkins-hbase4:42467] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42467,1689916449058; zookeeper connection closed. 2023-07-21 05:14:39,127 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): master:42467-0x101864d20580000, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:39,129 WARN [Listener at localhost/34619] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:39,136 INFO [Listener at localhost/34619] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:39,240 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:39,240 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-491990667-172.31.14.131-1689916444933 (Datanode Uuid f31f3cc6-e5a8-454e-9c55-b614b45314e8) service to localhost/127.0.0.1:38517 2023-07-21 05:14:39,242 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data5/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,242 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data6/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,244 WARN [Listener at localhost/34619] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:39,246 INFO [Listener at localhost/34619] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:39,253 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:39,253 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:39,253 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:39,350 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:39,350 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-491990667-172.31.14.131-1689916444933 (Datanode Uuid 4efe0456-d9e9-4901-b03c-557bd4813d3f) service to localhost/127.0.0.1:38517 2023-07-21 05:14:39,350 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data3/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,351 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data4/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,352 WARN [Listener at localhost/34619] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:39,354 INFO [Listener at localhost/34619] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:39,457 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:39,457 WARN [BP-491990667-172.31.14.131-1689916444933 heartbeating to localhost/127.0.0.1:38517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-491990667-172.31.14.131-1689916444933 (Datanode Uuid a790b6aa-49c2-4d0b-9db3-64fba725a48a) service to localhost/127.0.0.1:38517 2023-07-21 05:14:39,457 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data1/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,458 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/cluster_4e79e38a-666f-bc42-a998-45a19ecc7c64/dfs/data/data2/current/BP-491990667-172.31.14.131-1689916444933] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:39,486 INFO [Listener at localhost/34619] log.Slf4jLog(67): 
Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:39,606 INFO [Listener at localhost/34619] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 05:14:39,656 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.log.dir so I do NOT create it in target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2d9b6ca7-fe06-e267-b153-bf522362f645/hadoop.tmp.dir so I do NOT create it in target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1, deleteOnExit=true 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/test.cache.data in system properties and HBase conf 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir in system properties and HBase conf 2023-07-21 05:14:39,657 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 05:14:39,658 DEBUG [Listener at 
localhost/34619] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 05:14:39,658 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/nfs.dump.dir in system properties and 
HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/java.io.tmpdir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 05:14:39,659 INFO [Listener at localhost/34619] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 05:14:39,664 WARN [Listener at localhost/34619] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 05:14:39,665 WARN [Listener at localhost/34619] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 05:14:39,702 DEBUG [Listener at localhost/34619-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101864d2058000a, quorum=127.0.0.1:55013, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 05:14:39,703 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101864d2058000a, quorum=127.0.0.1:55013, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 05:14:39,709 WARN [Listener at localhost/34619] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 05:14:39,761 WARN [Listener at localhost/34619] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:39,763 INFO [Listener at localhost/34619] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:39,769 INFO [Listener at localhost/34619] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/java.io.tmpdir/Jetty_localhost_33847_hdfs____m088hx/webapp 2023-07-21 05:14:39,864 INFO [Listener at localhost/34619] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33847 2023-07-21 05:14:39,869 WARN [Listener at localhost/34619] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 05:14:39,869 WARN [Listener at localhost/34619] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 05:14:39,908 WARN [Listener at localhost/37015] common.MetricsLoggerTask(153): Metrics logging 
will not be async since the logger is not log4j 2023-07-21 05:14:39,920 WARN [Listener at localhost/37015] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:39,922 WARN [Listener at localhost/37015] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:39,924 INFO [Listener at localhost/37015] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:39,929 INFO [Listener at localhost/37015] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/java.io.tmpdir/Jetty_localhost_44323_datanode____mvte6z/webapp 2023-07-21 05:14:40,023 INFO [Listener at localhost/37015] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44323 2023-07-21 05:14:40,030 WARN [Listener at localhost/33387] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:40,044 WARN [Listener at localhost/33387] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:40,047 WARN [Listener at localhost/33387] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:40,048 INFO [Listener at localhost/33387] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:40,051 INFO [Listener at localhost/33387] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/java.io.tmpdir/Jetty_localhost_43459_datanode____228hgg/webapp 2023-07-21 05:14:40,139 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe386b77c890aaf50: Processing first storage report for DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685 from datanode 5bedb4b4-0ac8-4728-8a89-4a5551b8d750 2023-07-21 05:14:40,139 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe386b77c890aaf50: from storage DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685 node DatanodeRegistration(127.0.0.1:33595, datanodeUuid=5bedb4b4-0ac8-4728-8a89-4a5551b8d750, infoPort=40707, infoSecurePort=0, ipcPort=33387, storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,140 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe386b77c890aaf50: Processing first storage report for DS-33671e85-45a1-40a9-90f9-74dcbe5586ec from datanode 5bedb4b4-0ac8-4728-8a89-4a5551b8d750 2023-07-21 05:14:40,140 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe386b77c890aaf50: from storage DS-33671e85-45a1-40a9-90f9-74dcbe5586ec node DatanodeRegistration(127.0.0.1:33595, datanodeUuid=5bedb4b4-0ac8-4728-8a89-4a5551b8d750, infoPort=40707, infoSecurePort=0, ipcPort=33387, storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,173 
INFO [Listener at localhost/33387] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43459 2023-07-21 05:14:40,180 WARN [Listener at localhost/44969] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:40,193 WARN [Listener at localhost/44969] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:40,196 WARN [Listener at localhost/44969] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:40,198 INFO [Listener at localhost/44969] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:40,204 INFO [Listener at localhost/44969] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/java.io.tmpdir/Jetty_localhost_43163_datanode____j1sbw6/webapp 2023-07-21 05:14:40,297 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd280c0f8bf60d811: Processing first storage report for DS-2e84e184-abf4-437d-be8d-c1982675d7bb from datanode 2af665ec-186d-4352-a73f-4a965871ad0b 2023-07-21 05:14:40,297 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd280c0f8bf60d811: from storage DS-2e84e184-abf4-437d-be8d-c1982675d7bb node DatanodeRegistration(127.0.0.1:45331, datanodeUuid=2af665ec-186d-4352-a73f-4a965871ad0b, infoPort=37891, infoSecurePort=0, ipcPort=44969, storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,297 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd280c0f8bf60d811: Processing first storage report for DS-34058c5c-4863-457b-ab46-0cc38b79455d from datanode 2af665ec-186d-4352-a73f-4a965871ad0b 2023-07-21 05:14:40,297 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd280c0f8bf60d811: from storage DS-34058c5c-4863-457b-ab46-0cc38b79455d node DatanodeRegistration(127.0.0.1:45331, datanodeUuid=2af665ec-186d-4352-a73f-4a965871ad0b, infoPort=37891, infoSecurePort=0, ipcPort=44969, storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,310 INFO [Listener at localhost/44969] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43163 2023-07-21 05:14:40,318 WARN [Listener at localhost/40271] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:40,430 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ed88823168086dd: Processing first storage report for DS-4190f524-e513-4e34-96e6-0bfb064620ec from datanode 4663a5df-d10e-4a7e-90d9-aa5d7b13cf1a 2023-07-21 05:14:40,430 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ed88823168086dd: from storage DS-4190f524-e513-4e34-96e6-0bfb064620ec node DatanodeRegistration(127.0.0.1:38827, datanodeUuid=4663a5df-d10e-4a7e-90d9-aa5d7b13cf1a, infoPort=43121, infoSecurePort=0, ipcPort=40271, 
storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,430 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ed88823168086dd: Processing first storage report for DS-e48d6b47-5389-4a64-ab4d-d36fdda37d09 from datanode 4663a5df-d10e-4a7e-90d9-aa5d7b13cf1a 2023-07-21 05:14:40,430 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ed88823168086dd: from storage DS-e48d6b47-5389-4a64-ab4d-d36fdda37d09 node DatanodeRegistration(127.0.0.1:38827, datanodeUuid=4663a5df-d10e-4a7e-90d9-aa5d7b13cf1a, infoPort=43121, infoSecurePort=0, ipcPort=40271, storageInfo=lv=-57;cid=testClusterID;nsid=1383590568;c=1689916479667), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:40,531 DEBUG [Listener at localhost/40271] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882 2023-07-21 05:14:40,534 INFO [Listener at localhost/40271] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/zookeeper_0, clientPort=60035, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 05:14:40,535 INFO [Listener at localhost/40271] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60035 2023-07-21 05:14:40,536 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,537 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,558 INFO [Listener at localhost/40271] util.FSUtils(471): Created version file at hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9 with version=8 2023-07-21 05:14:40,558 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/hbase-staging 2023-07-21 05:14:40,559 DEBUG [Listener at localhost/40271] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 05:14:40,559 DEBUG [Listener at localhost/40271] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 05:14:40,560 DEBUG [Listener at localhost/40271] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 
2023-07-21 05:14:40,560 DEBUG [Listener at localhost/40271] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:40,561 INFO [Listener at localhost/40271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:40,562 INFO [Listener at localhost/40271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42797 2023-07-21 05:14:40,563 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,564 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,565 INFO [Listener at localhost/40271] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42797 connecting to ZooKeeper ensemble=127.0.0.1:60035 2023-07-21 05:14:40,573 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:427970x0, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:40,574 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42797-0x101864d9f180000 connected 2023-07-21 05:14:40,595 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:40,596 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:40,596 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:40,597 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42797 2023-07-21 05:14:40,599 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42797 2023-07-21 05:14:40,600 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42797 2023-07-21 05:14:40,601 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42797 2023-07-21 05:14:40,601 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42797 2023-07-21 05:14:40,604 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:40,604 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:40,604 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:40,605 INFO [Listener at localhost/40271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 05:14:40,605 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:40,605 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:40,605 INFO [Listener at localhost/40271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 05:14:40,605 INFO [Listener at localhost/40271] http.HttpServer(1146): Jetty bound to port 46635 2023-07-21 05:14:40,606 INFO [Listener at localhost/40271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:40,607 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,608 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@60e1aefd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:40,608 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,608 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@668a6b88{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:40,619 INFO [Listener at localhost/40271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:40,620 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:40,620 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:40,621 INFO [Listener at localhost/40271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:40,623 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,624 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@17183753{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-21 05:14:40,625 INFO [Listener at localhost/40271] server.AbstractConnector(333): Started ServerConnector@140a6448{HTTP/1.1, (http/1.1)}{0.0.0.0:46635} 2023-07-21 05:14:40,625 INFO [Listener at localhost/40271] server.Server(415): Started @37748ms 2023-07-21 05:14:40,626 INFO [Listener at localhost/40271] master.HMaster(444): hbase.rootdir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9, hbase.cluster.distributed=false 2023-07-21 05:14:40,704 INFO [Listener at localhost/40271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:40,705 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,705 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,705 INFO [Listener at localhost/40271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 
05:14:40,705 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,705 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:40,705 INFO [Listener at localhost/40271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:40,706 INFO [Listener at localhost/40271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42737 2023-07-21 05:14:40,706 INFO [Listener at localhost/40271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:40,707 DEBUG [Listener at localhost/40271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:40,708 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,709 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,711 INFO [Listener at localhost/40271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42737 connecting to ZooKeeper ensemble=127.0.0.1:60035 2023-07-21 05:14:40,714 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:427370x0, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:40,716 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42737-0x101864d9f180001 connected 2023-07-21 05:14:40,716 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:40,717 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:40,718 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:40,718 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42737 2023-07-21 05:14:40,718 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42737 2023-07-21 05:14:40,722 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42737 2023-07-21 05:14:40,723 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42737 2023-07-21 05:14:40,723 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42737 2023-07-21 05:14:40,726 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:40,726 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:40,726 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:40,727 INFO [Listener at localhost/40271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:40,727 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:40,727 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:40,727 INFO [Listener at localhost/40271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 05:14:40,729 INFO [Listener at localhost/40271] http.HttpServer(1146): Jetty bound to port 45021 2023-07-21 05:14:40,729 INFO [Listener at localhost/40271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:40,734 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,734 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@636fde24{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:40,735 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,735 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a647aaf{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:40,743 INFO [Listener at localhost/40271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:40,744 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:40,744 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:40,744 INFO [Listener at localhost/40271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:40,747 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,748 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@35d76b04{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:40,750 INFO [Listener at localhost/40271] server.AbstractConnector(333): Started ServerConnector@57e9a200{HTTP/1.1, (http/1.1)}{0.0.0.0:45021} 2023-07-21 05:14:40,750 INFO [Listener at localhost/40271] server.Server(415): Started @37873ms 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:40,763 INFO [Listener at localhost/40271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:40,764 INFO [Listener at localhost/40271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40459 2023-07-21 05:14:40,764 INFO [Listener at localhost/40271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:40,766 DEBUG [Listener at localhost/40271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:40,766 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,767 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,768 INFO [Listener at localhost/40271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40459 connecting to ZooKeeper ensemble=127.0.0.1:60035 2023-07-21 05:14:40,772 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:404590x0, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:40,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40459-0x101864d9f180002 connected 2023-07-21 05:14:40,774 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): 
regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:40,774 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:40,775 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:40,778 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40459 2023-07-21 05:14:40,778 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40459 2023-07-21 05:14:40,779 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40459 2023-07-21 05:14:40,779 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40459 2023-07-21 05:14:40,779 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40459 2023-07-21 05:14:40,782 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:40,782 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:40,782 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:40,783 INFO [Listener at localhost/40271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:40,783 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:40,783 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:40,783 INFO [Listener at localhost/40271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 05:14:40,784 INFO [Listener at localhost/40271] http.HttpServer(1146): Jetty bound to port 39093 2023-07-21 05:14:40,784 INFO [Listener at localhost/40271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:40,789 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,789 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15b2670b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:40,789 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,790 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16387d57{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:40,795 INFO [Listener at localhost/40271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:40,796 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:40,796 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:40,796 INFO [Listener at localhost/40271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:40,797 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,798 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5574719f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:40,800 INFO [Listener at localhost/40271] server.AbstractConnector(333): Started ServerConnector@3fecd666{HTTP/1.1, (http/1.1)}{0.0.0.0:39093} 2023-07-21 05:14:40,800 INFO [Listener at localhost/40271] server.Server(415): Started @37923ms 2023-07-21 05:14:40,817 INFO [Listener at localhost/40271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:40,818 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,818 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,818 INFO [Listener at localhost/40271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:40,818 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-21 05:14:40,818 INFO [Listener at localhost/40271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:40,819 INFO [Listener at localhost/40271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:40,820 INFO [Listener at localhost/40271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41649 2023-07-21 05:14:40,821 INFO [Listener at localhost/40271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:40,822 DEBUG [Listener at localhost/40271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:40,823 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,824 INFO [Listener at localhost/40271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,825 INFO [Listener at localhost/40271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41649 connecting to ZooKeeper ensemble=127.0.0.1:60035 2023-07-21 05:14:40,828 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:416490x0, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:40,830 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41649-0x101864d9f180003 connected 2023-07-21 05:14:40,830 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:40,830 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:40,831 DEBUG [Listener at localhost/40271] zookeeper.ZKUtil(164): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:40,831 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41649 2023-07-21 05:14:40,832 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41649 2023-07-21 05:14:40,833 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41649 2023-07-21 05:14:40,834 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41649 2023-07-21 05:14:40,838 DEBUG [Listener at localhost/40271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41649 2023-07-21 05:14:40,840 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:40,840 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:40,840 INFO [Listener at localhost/40271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:40,841 INFO [Listener at localhost/40271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:40,841 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:40,841 INFO [Listener at localhost/40271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:40,841 INFO [Listener at localhost/40271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 05:14:40,841 INFO [Listener at localhost/40271] http.HttpServer(1146): Jetty bound to port 39129 2023-07-21 05:14:40,842 INFO [Listener at localhost/40271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:40,843 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,843 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c0552ff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:40,843 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,844 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@44e67843{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:40,849 INFO [Listener at localhost/40271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:40,850 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:40,850 INFO [Listener at localhost/40271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:40,850 INFO [Listener at localhost/40271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:40,852 INFO [Listener at localhost/40271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:40,853 INFO [Listener at localhost/40271] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2e6aa2b0{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:40,854 INFO [Listener at localhost/40271] server.AbstractConnector(333): Started ServerConnector@7929487c{HTTP/1.1, (http/1.1)}{0.0.0.0:39129} 2023-07-21 05:14:40,854 INFO [Listener at localhost/40271] server.Server(415): Started @37977ms 2023-07-21 05:14:40,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:40,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3bc8cd48{HTTP/1.1, (http/1.1)}{0.0.0.0:34573} 2023-07-21 05:14:40,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37988ms 2023-07-21 05:14:40,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:40,873 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 05:14:40,874 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:40,875 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:40,875 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:40,875 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:40,875 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:40,876 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:40,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:40,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42797,1689916480560 from backup master directory 2023-07-21 
05:14:40,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:40,881 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:40,881 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 05:14:40,881 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:40,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:40,899 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/hbase.id with ID: 45b7a1ad-4c0b-4d57-84bd-61a66bd97b16 2023-07-21 05:14:40,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:40,914 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:40,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4818754b to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:40,935 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51bf1f95, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:40,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:40,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 05:14:40,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:40,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store-tmp 2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 05:14:40,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:40,948 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
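The descriptor dumped above for 'master:store' ({NAME => 'proc', BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', ...}) is the string form of a table descriptor with a single 'proc' column family. A rough equivalent built with the HBase 2.x client builders, assuming the standard org.apache.hadoop.hbase.client API; the attribute values mirror the log, everything not set here stays at its default:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptor {
        public static void main(String[] args) {
            // One 'proc' family: ROW bloom filter, single version, 64 KB blocks,
            // block cache on, matching the attributes printed by MasterRegion(309).
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)
                .setMaxVersions(1)
                .setInMemory(false)
                .setBlocksize(65536)
                .setBlockCacheEnabled(true)
                .build();
            TableDescriptor store = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("master", "store"))
                .setColumnFamily(proc)
                .build();
            System.out.println(store);   // prints a {NAME => 'proc', ...} style string
        }
    }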
2023-07-21 05:14:40,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:40,949 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/WALs/jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:40,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42797%2C1689916480560, suffix=, logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/WALs/jenkins-hbase4.apache.org,42797,1689916480560, archiveDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/oldWALs, maxLogs=10 2023-07-21 05:14:40,971 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK] 2023-07-21 05:14:40,972 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK] 2023-07-21 05:14:40,972 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK] 2023-07-21 05:14:40,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/WALs/jenkins-hbase4.apache.org,42797,1689916480560/jenkins-hbase4.apache.org%2C42797%2C1689916480560.1689916480952 2023-07-21 05:14:40,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK], DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK]] 2023-07-21 05:14:40,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:40,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:40,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,983 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,985 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 05:14:40,986 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 05:14:40,986 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:40,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:40,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:40,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11321483200, jitterRate=0.05439528822898865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:40,993 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:40,993 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 05:14:40,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 05:14:40,994 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 05:14:40,994 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 05:14:40,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 05:14:40,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 05:14:40,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 05:14:41,003 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 05:14:41,004 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 05:14:41,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 05:14:41,005 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 05:14:41,005 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 05:14:41,007 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:41,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 05:14:41,007 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 05:14:41,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 05:14:41,009 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:41,009 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:41,010 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 05:14:41,010 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:41,009 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:41,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42797,1689916480560, sessionid=0x101864d9f180000, setting cluster-up flag (Was=false) 2023-07-21 05:14:41,016 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:41,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 05:14:41,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:41,025 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:41,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 05:14:41,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:41,031 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.hbase-snapshot/.tmp 2023-07-21 05:14:41,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 05:14:41,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 05:14:41,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 05:14:41,035 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:41,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
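The master election steps visible above (create an ephemeral znode under /hbase/backup-masters, watch /hbase/master, then delete the backup entry and register as active) follow the usual ZooKeeper ephemeral-node leader election pattern. A stripped-down sketch of that pattern with the plain ZooKeeper client; the znode names follow the log, but the method itself is illustrative rather than a copy of HBase's ActiveMasterManager:

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class TryBecomeActiveMaster {
        // Returns true if this process created /hbase/master and should act as active master.
        // Assumes /hbase and /hbase/backup-masters already exist.
        static boolean tryBecomeActive(ZooKeeper zk, String serverName) throws Exception {
            byte[] id = serverName.getBytes(StandardCharsets.UTF_8);
            // Announce ourselves as a candidate; ephemeral, so it vanishes with our session.
            zk.create("/hbase/backup-masters/" + serverName, id,
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            try {
                // Whoever creates /hbase/master first wins the election.
                zk.create("/hbase/master", id,
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                // Won: drop the backup entry, mirroring "Deleting ZNode ... from backup
                // master directory" above. Version -1 matches any version.
                zk.delete("/hbase/backup-masters/" + serverName, -1);
                return true;
            } catch (KeeperException.NodeExistsException alreadyTaken) {
                // Someone else is active; stay a backup and watch /hbase/master for deletion.
                zk.exists("/hbase/master", true);
                return false;
            }
        }
    }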
2023-07-21 05:14:41,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 05:14:41,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:41,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:41,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 05:14:41,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:41,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
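The StochasticLoadBalancer(253) lines enumerate the cost functions the balancer combines and print the "sum of multiplier of cost functions" (0.0 at this point in startup). Conceptually each function yields a cost in [0, 1] that is scaled by its multiplier and summed; how HBase scales or thresholds the result internally is not shown in the log, so the following is only a toy illustration in plain Java, with hypothetical multipliers and costs rather than the balancer's real code:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BalancerCostSketch {
        // Weighted combination of per-function costs (illustrative only).
        static double weightedCost(Map<String, double[]> funcs) {
            double sum = 0.0;
            for (double[] mc : funcs.values()) {   // mc[0] = multiplier, mc[1] = cost in [0, 1]
                sum += mc[0] * mc[1];
            }
            return sum;
        }

        static double multiplierSum(Map<String, double[]> funcs) {
            double sum = 0.0;
            for (double[] mc : funcs.values()) {
                sum += mc[0];
            }
            return sum;
        }

        public static void main(String[] args) {
            Map<String, double[]> funcs = new LinkedHashMap<>();
            // Hypothetical multipliers and costs for three of the functions named in the log.
            funcs.put("RegionCountSkewCostFunction", new double[] {500, 0.10});
            funcs.put("MoveCostFunction",            new double[] {  7, 0.02});
            funcs.put("ServerLocalityCostFunction",  new double[] { 25, 0.40});
            System.out.println("weighted cost  = " + weightedCost(funcs));
            // The quantity the log reports as "sum of multiplier of cost functions":
            System.out.println("multiplier sum = " + multiplierSum(funcs));
        }
    }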
2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:41,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689916511053 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 05:14:41,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 05:14:41,072 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
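Each ExecutorService(93) line above starts a named pool with a fixed corePoolSize/maxPoolSize for one class of master operation. The JDK building block for such a pool is a ThreadPoolExecutor over an unbounded work queue; a minimal sketch (pool names and sizes copied from the log, the thread-naming and timeout details are illustrative):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class MasterPools {
        static ThreadPoolExecutor namedPool(String name, int core, int max) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
                r -> new Thread(r, name + "-" + System.nanoTime()));
            pool.allowCoreThreadTimeOut(true);   // let an idle pool shrink back to zero threads
            return pool;
        }

        public static void main(String[] args) {
            // Pool names and sizes as reported by the master above.
            ThreadPoolExecutor openRegion = namedPool("MASTER_OPEN_REGION", 5, 5);
            ThreadPoolExecutor tableOps   = namedPool("MASTER_TABLE_OPERATIONS", 1, 1);
            openRegion.execute(() -> System.out.println("would open a region here"));
            openRegion.shutdown();
            tableOps.shutdown();
        }
    }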
2023-07-21 05:14:41,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 05:14:41,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 05:14:41,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 05:14:41,076 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:41,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 05:14:41,077 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 05:14:41,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 05:14:41,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916481077,5,FailOnTimeoutGroup] 2023-07-21 05:14:41,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916481077,5,FailOnTimeoutGroup] 2023-07-21 05:14:41,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 05:14:41,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
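The ChoreService(166) lines schedule recurring maintenance tasks with fixed periods (LogsCleaner every 600000 ms, SnapshotCleaner every 1800000 ms, and so on). Stripped of HBase specifics, a chore is a periodic task on a scheduler thread; a small sketch with a JDK ScheduledExecutorService, using periods from the log and hypothetical task bodies:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreServiceSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService chores = Executors.newScheduledThreadPool(1, r -> {
                Thread t = new Thread(r, "ChoreService");
                t.setDaemon(true);            // chores must not keep the JVM alive on their own
                return t;
            });
            // LogsCleaner: period=600000 ms, as enabled above (hypothetical task body).
            chores.scheduleAtFixedRate(
                () -> System.out.println("scan oldWALs and delete expired files"),
                0, 600_000, TimeUnit.MILLISECONDS);
            // SnapshotCleaner: period=1800000 ms (hypothetical task body).
            chores.scheduleAtFixedRate(
                () -> System.out.println("prune expired snapshots"),
                0, 1_800_000, TimeUnit.MILLISECONDS);
            Thread.sleep(1_000);              // give the first runs a chance before the demo exits
        }
    }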
2023-07-21 05:14:41,077 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(951): ClusterId : 45b7a1ad-4c0b-4d57-84bd-61a66bd97b16 2023-07-21 05:14:41,077 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(951): ClusterId : 45b7a1ad-4c0b-4d57-84bd-61a66bd97b16 2023-07-21 05:14:41,079 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(951): ClusterId : 45b7a1ad-4c0b-4d57-84bd-61a66bd97b16 2023-07-21 05:14:41,078 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:41,080 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:41,080 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:41,080 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:41,082 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:41,082 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:41,082 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:41,082 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:41,083 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:41,083 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:41,085 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:41,087 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:41,088 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ReadOnlyZKClient(139): Connect 0x08173bda to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:41,088 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:41,088 DEBUG [RS:0;jenkins-hbase4:42737] 
zookeeper.ReadOnlyZKClient(139): Connect 0x5dbe7c66 to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:41,089 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ReadOnlyZKClient(139): Connect 0x61f10f78 to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:41,099 DEBUG [RS:1;jenkins-hbase4:40459] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d0c4a8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:41,099 DEBUG [RS:1;jenkins-hbase4:40459] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ef50aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:41,102 DEBUG [RS:2;jenkins-hbase4:41649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b096408, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:41,102 DEBUG [RS:2;jenkins-hbase4:41649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e0be70d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:41,103 DEBUG [RS:0;jenkins-hbase4:42737] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@537989a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:41,103 DEBUG [RS:0;jenkins-hbase4:42737] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57461716, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:41,112 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41649 2023-07-21 05:14:41,112 INFO [RS:2;jenkins-hbase4:41649] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:41,112 INFO [RS:2;jenkins-hbase4:41649] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:41,112 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1022): About to register with Master. 
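Each region server above also installs a shutdown hook thread ("Installed shutdown hook thread: Shutdownhook:RS:...") so that an orderly stop still runs when the JVM is terminated from outside. The underlying mechanism is Runtime.addShutdownHook; a tiny sketch in which the cleanup body is hypothetical:

    public class RsShutdownHook {
        public static void main(String[] args) throws InterruptedException {
            Thread hook = new Thread(() -> {
                // Runs on JVM shutdown (SIGTERM, System.exit, or normal end of main).
                System.out.println("close regions, flush WAL, drop ephemeral znode (hypothetical)");
            }, "Shutdownhook:RS:0");
            Runtime.getRuntime().addShutdownHook(hook);
            System.out.println("server running; the hook fires when the JVM exits");
            Thread.sleep(2_000);   // when main returns, the hook above is invoked
        }
    }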
2023-07-21 05:14:41,113 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40459 2023-07-21 05:14:41,113 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42797,1689916480560 with isa=jenkins-hbase4.apache.org/172.31.14.131:41649, startcode=1689916480817 2023-07-21 05:14:41,113 INFO [RS:1;jenkins-hbase4:40459] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:41,113 INFO [RS:1;jenkins-hbase4:40459] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:41,113 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:41,113 DEBUG [RS:2;jenkins-hbase4:41649] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:41,114 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42797,1689916480560 with isa=jenkins-hbase4.apache.org/172.31.14.131:40459, startcode=1689916480762 2023-07-21 05:14:41,114 DEBUG [RS:1;jenkins-hbase4:40459] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:41,115 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42737 2023-07-21 05:14:41,115 INFO [RS:0;jenkins-hbase4:42737] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:41,115 INFO [RS:0;jenkins-hbase4:42737] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:41,115 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:41,115 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52177, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:41,115 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42797,1689916480560 with isa=jenkins-hbase4.apache.org/172.31.14.131:42737, startcode=1689916480704 2023-07-21 05:14:41,115 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52295, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:41,116 DEBUG [RS:0;jenkins-hbase4:42737] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:41,117 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,117 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 05:14:41,118 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 05:14:41,118 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,118 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:41,118 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9 2023-07-21 05:14:41,118 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 05:14:41,118 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37015 2023-07-21 05:14:41,118 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46635 2023-07-21 05:14:41,118 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9 2023-07-21 05:14:41,118 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37015 2023-07-21 05:14:41,118 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46635 2023-07-21 05:14:41,118 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47053, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:41,119 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,119 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
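The "Config from master" lines show each region server adopting a few key/value overrides (hbase.rootdir, fs.defaultFS, hbase.master.info.port) returned by the master during registration. Applying such overrides amounts to setting keys on a Hadoop Configuration; a sketch assuming the stock org.apache.hadoop.conf.Configuration API, with the values copied from the log:

    import org.apache.hadoop.conf.Configuration;

    public class ApplyMasterConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Overrides reported by HRegionServer(1595) above.
            conf.set("hbase.rootdir",
                     "hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9");
            conf.set("fs.defaultFS", "hdfs://localhost:37015");
            conf.setInt("hbase.master.info.port", 46635);
            System.out.println(conf.get("hbase.rootdir"));
        }
    }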
2023-07-21 05:14:41,119 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 05:14:41,119 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9 2023-07-21 05:14:41,119 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37015 2023-07-21 05:14:41,119 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46635 2023-07-21 05:14:41,123 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:41,124 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ZKUtil(162): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,124 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ZKUtil(162): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,124 WARN [RS:1;jenkins-hbase4:40459] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:41,124 WARN [RS:2;jenkins-hbase4:41649] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
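The rs znode handling above (each server's ephemeral entry under /hbase/rs plus the NodeChildrenChanged event the master receives on /hbase/rs) is how live membership is tracked: list the children with a watch, then re-list whenever the watch fires. A minimal version of that loop with the plain ZooKeeper client; the paths follow the log, the rest is illustrative:

    import java.util.List;
    import org.apache.zookeeper.Watcher.Event.EventType;
    import org.apache.zookeeper.ZooKeeper;

    public class TrackRegionServers {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:60035", 30000, event -> {
                if (event.getType() == EventType.NodeChildrenChanged
                    && "/hbase/rs".equals(event.getPath())) {
                    System.out.println("membership changed, re-list /hbase/rs");
                }
            });
            // getChildren with watch=true returns the current members and arms a one-shot
            // watch, so the list must be re-read (and the watch re-armed) after every event.
            List<String> servers = zk.getChildren("/hbase/rs", true);
            System.out.println("live regionservers: " + servers);
            Thread.sleep(60_000);
            zk.close();
        }
    }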
2023-07-21 05:14:41,125 INFO [RS:1;jenkins-hbase4:40459] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:41,125 INFO [RS:2;jenkins-hbase4:41649] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:41,125 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,125 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40459,1689916480762] 2023-07-21 05:14:41,125 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,125 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42737,1689916480704] 2023-07-21 05:14:41,125 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41649,1689916480817] 2023-07-21 05:14:41,125 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ZKUtil(162): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,125 WARN [RS:0;jenkins-hbase4:42737] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:41,125 INFO [RS:0;jenkins-hbase4:42737] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:41,125 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,132 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ZKUtil(162): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,132 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ZKUtil(162): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,132 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ZKUtil(162): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,132 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ZKUtil(162): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,132 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ZKUtil(162): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,132 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ZKUtil(162): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,133 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ZKUtil(162): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,133 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ZKUtil(162): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,134 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ZKUtil(162): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,135 DEBUG [RS:2;jenkins-hbase4:41649] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:41,135 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:41,135 INFO [RS:2;jenkins-hbase4:41649] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:41,135 INFO [RS:0;jenkins-hbase4:42737] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:41,135 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:41,136 INFO [RS:1;jenkins-hbase4:40459] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:41,136 INFO [RS:2;jenkins-hbase4:41649] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:41,139 INFO [RS:2;jenkins-hbase4:41649] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:41,139 INFO [RS:1;jenkins-hbase4:40459] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:41,139 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,139 INFO [RS:0;jenkins-hbase4:42737] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:41,139 INFO [RS:1;jenkins-hbase4:40459] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:41,139 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
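The MemStoreFlusher(125) lines report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, i.e. the low mark sits at roughly 95% of the limit, and the limit itself is a fixed fraction of the JVM heap (hbase.regionserver.global.memstore.size, whose commonly cited default in HBase 2.x is 0.4). A back-of-the-envelope check in plain Java; the 0.40 and 0.95 factors are assumptions stated here, not values read from the log:

    public class MemstoreLimits {
        public static void main(String[] args) {
            long maxHeap = Runtime.getRuntime().maxMemory();   // JVM -Xmx in bytes
            double globalFraction = 0.40;       // assumed default hbase.regionserver.global.memstore.size
            double lowerMarkFraction = 0.95;    // assumed default lower-mark ratio of the limit
            long limit = (long) (maxHeap * globalFraction);
            long lowMark = (long) (limit * lowerMarkFraction);
            System.out.printf("globalMemStoreLimit=%.1f MB, lowMark=%.1f MB%n",
                              limit / 1024.0 / 1024.0, lowMark / 1024.0 / 1024.0);
            // With a heap of about 1.9 GB this lands near the 782.4 MB / 743.3 MB figures above.
        }
    }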
2023-07-21 05:14:41,139 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:41,140 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:41,140 INFO [RS:0;jenkins-hbase4:42737] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:41,140 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,142 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:41,143 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,144 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:41,145 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,145 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,145 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,145 DEBUG [RS:2;jenkins-hbase4:41649] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,148 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,148 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
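Each "Starting executor service name=RS_*" line above corresponds to a small fixed-size pool keyed by operation type, with corePoolSize equal to maxPoolSize. A rough JDK-only stand-in for two of the logged pools; the class and thread names are illustrative, not HBase's own executor classes:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Illustrative only: bounded pools mirroring the corePoolSize/maxPoolSize values in the log.
    public final class RsExecutorSketch {
      public static void main(String[] args) {
        ThreadPoolExecutor openRegion = new ThreadPoolExecutor(
            1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
            r -> new Thread(r, "RS_OPEN_REGION-sketch"));
        ThreadPoolExecutor logReplayOps = new ThreadPoolExecutor(
            2, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
            r -> new Thread(r, "RS_LOG_REPLAY_OPS-sketch"));
        openRegion.execute(() -> System.out.println("open region task"));
        openRegion.shutdown();
        logReplayOps.shutdown();
      }
    }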
2023-07-21 05:14:41,148 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,148 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,148 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,149 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,149 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] 
executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,150 DEBUG [RS:1;jenkins-hbase4:40459] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,151 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,151 DEBUG [RS:0;jenkins-hbase4:42737] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:41,159 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,159 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,159 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,159 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,162 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,162 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,162 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,162 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,164 INFO [RS:2;jenkins-hbase4:41649] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:41,165 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41649,1689916480817-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,174 INFO [RS:1;jenkins-hbase4:40459] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:41,174 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40459,1689916480762-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,178 INFO [RS:0;jenkins-hbase4:42737] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:41,178 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42737,1689916480704-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
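The ScheduledChore lines above all follow one pattern: a named task run at a fixed period by the region server's ChoreService. A JDK approximation, under the assumption that ChoreService behaves essentially like a scheduled thread pool; CompactionCheckerSketch is a made-up name for illustration:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Approximates "ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS".
    public final class CompactionCheckerSketch {
      public static void main(String[] args) {
        ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor();
        chores.scheduleAtFixedRate(
            () -> { /* walk online regions and queue compactions if needed */ },
            1000, 1000, TimeUnit.MILLISECONDS);
      }
    }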
2023-07-21 05:14:41,181 INFO [RS:2;jenkins-hbase4:41649] regionserver.Replication(203): jenkins-hbase4.apache.org,41649,1689916480817 started 2023-07-21 05:14:41,181 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41649,1689916480817, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41649, sessionid=0x101864d9f180003 2023-07-21 05:14:41,181 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:41,181 DEBUG [RS:2;jenkins-hbase4:41649] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,181 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41649,1689916480817' 2023-07-21 05:14:41,181 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:41,181 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41649,1689916480817' 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:41,182 DEBUG [RS:2;jenkins-hbase4:41649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:41,183 DEBUG [RS:2;jenkins-hbase4:41649] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:41,183 INFO [RS:2;jenkins-hbase4:41649] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 05:14:41,185 INFO [RS:1;jenkins-hbase4:40459] regionserver.Replication(203): jenkins-hbase4.apache.org,40459,1689916480762 started 2023-07-21 05:14:41,185 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40459,1689916480762, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40459, sessionid=0x101864d9f180002 2023-07-21 05:14:41,185 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:41,185 DEBUG [RS:1;jenkins-hbase4:40459] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,185 DEBUG [RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40459,1689916480762' 2023-07-21 05:14:41,185 DEBUG [RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:41,185 DEBUG 
[RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:41,185 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40459,1689916480762' 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:41,186 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ZKUtil(398): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 05:14:41,186 INFO [RS:2;jenkins-hbase4:41649] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:41,186 DEBUG [RS:1;jenkins-hbase4:40459] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:41,186 INFO [RS:1;jenkins-hbase4:40459] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 05:14:41,186 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,187 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,187 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,187 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ZKUtil(398): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 05:14:41,187 INFO [RS:1;jenkins-hbase4:40459] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 05:14:41,187 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,187 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
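The ZKUtil(398) lines treat a missing /hbase/rpc-throttle znode as "not an error" and fall back to rpc throttle enabled. A hedged sketch of that read-or-default pattern against the quorum address logged above; the real znode payload is protobuf-encoded, so the boolean parse here is purely illustrative:

    import org.apache.zookeeper.ZooKeeper;

    // Illustrative only: an absent znode yields the default (throttle enabled), as in the log.
    public final class RpcThrottleZnodeSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:60035", 30_000, event -> { });
        byte[] data = zk.exists("/hbase/rpc-throttle", false) == null
            ? null
            : zk.getData("/hbase/rpc-throttle", false, null);
        boolean rpcThrottleEnabled = data == null || Boolean.parseBoolean(new String(data));
        System.out.println("rpc throttle enabled is " + rpcThrottleEnabled);
        zk.close();
      }
    }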
2023-07-21 05:14:41,189 INFO [RS:0;jenkins-hbase4:42737] regionserver.Replication(203): jenkins-hbase4.apache.org,42737,1689916480704 started 2023-07-21 05:14:41,190 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42737,1689916480704, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42737, sessionid=0x101864d9f180001 2023-07-21 05:14:41,192 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:41,192 DEBUG [RS:0;jenkins-hbase4:42737] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,192 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42737,1689916480704' 2023-07-21 05:14:41,192 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:41,195 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:41,196 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:41,196 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:41,196 DEBUG [RS:0;jenkins-hbase4:42737] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,196 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42737,1689916480704' 2023-07-21 05:14:41,196 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:41,197 DEBUG [RS:0;jenkins-hbase4:42737] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:41,198 DEBUG [RS:0;jenkins-hbase4:42737] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:41,198 INFO [RS:0;jenkins-hbase4:42737] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 05:14:41,198 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,198 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ZKUtil(398): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 05:14:41,198 INFO [RS:0;jenkins-hbase4:42737] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 05:14:41,198 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,198 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
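Each "Looking for new procedures under znode" line is the member side of the ZooKeeper-coordinated procedure framework: the region server lists the acquired node and leaves a watch so new procedure children trigger a callback. A small sketch of that list-and-watch step using the plain ZooKeeper client; error handling and the actual acquire/commit protocol are omitted:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative only: watch /hbase/flush-table-proc/acquired for new procedure nodes.
    public final class ProcedureMemberWatchSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:60035", 30_000, event -> { });
        List<String> pending = zk.getChildren("/hbase/flush-table-proc/acquired",
            event -> System.out.println("procedure node event: " + event.getPath()));
        System.out.println("currently acquired procedures: " + pending);
        zk.close();
      }
    }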
2023-07-21 05:14:41,292 INFO [RS:2;jenkins-hbase4:41649] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41649%2C1689916480817, suffix=, logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,41649,1689916480817, archiveDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs, maxLogs=32 2023-07-21 05:14:41,296 INFO [RS:1;jenkins-hbase4:40459] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40459%2C1689916480762, suffix=, logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,40459,1689916480762, archiveDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs, maxLogs=32 2023-07-21 05:14:41,312 INFO [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42737%2C1689916480704, suffix=, logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,42737,1689916480704, archiveDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs, maxLogs=32 2023-07-21 05:14:41,334 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK] 2023-07-21 05:14:41,335 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK] 2023-07-21 05:14:41,336 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK] 2023-07-21 05:14:41,350 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK] 2023-07-21 05:14:41,350 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK] 2023-07-21 05:14:41,351 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK] 2023-07-21 05:14:41,353 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK] 2023-07-21 05:14:41,353 DEBUG [RS-EventLoopGroup-11-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK] 2023-07-21 05:14:41,353 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK] 2023-07-21 05:14:41,357 INFO [RS:2;jenkins-hbase4:41649] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,41649,1689916480817/jenkins-hbase4.apache.org%2C41649%2C1689916480817.1689916481294 2023-07-21 05:14:41,359 INFO [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,42737,1689916480704/jenkins-hbase4.apache.org%2C42737%2C1689916480704.1689916481313 2023-07-21 05:14:41,363 DEBUG [RS:2;jenkins-hbase4:41649] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK], DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK]] 2023-07-21 05:14:41,363 INFO [RS:1;jenkins-hbase4:40459] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,40459,1689916480762/jenkins-hbase4.apache.org%2C40459%2C1689916480762.1689916481297 2023-07-21 05:14:41,363 DEBUG [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK], DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK]] 2023-07-21 05:14:41,370 DEBUG [RS:1;jenkins-hbase4:40459] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK], DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK], DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK]] 2023-07-21 05:14:41,511 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:41,511 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:41,512 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, 
{NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9 2023-07-21 05:14:41,528 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:41,530 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:41,532 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/info 2023-07-21 05:14:41,533 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:41,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:41,536 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:41,537 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:41,537 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,537 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:41,539 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/table 2023-07-21 05:14:41,539 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:41,540 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,541 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740 2023-07-21 05:14:41,541 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740 2023-07-21 05:14:41,544 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
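Two of the derived numbers above are easy to reproduce. The WAL configuration reports rollsize=128 MB against blocksize=256 MB, which matches the default log-roll multiplier of 0.5, and the FlushLargeStoresPolicy line's flushSizeLowerBound=44739242 is the default 128 MB region flush size divided by the three column families of hbase:meta. A quick check of that arithmetic, with the default values assumed rather than read from the running conf:

    // Hedged arithmetic check for the rollsize and flushSizeLowerBound values logged above.
    public final class DerivedSizesSketch {
      public static void main(String[] args) {
        long walBlockSize = 256L * 1024 * 1024;
        long rollSize = (long) (walBlockSize * 0.5);        // 134217728 -> "rollsize=128 MB"
        long memstoreFlushSize = 128L * 1024 * 1024;
        long perFamilyLowerBound = memstoreFlushSize / 3;   // 44739242 bytes (~42.7 M)
        System.out.println(rollSize + " " + perFamilyLowerBound);
      }
    }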
2023-07-21 05:14:41,546 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:41,549 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:41,549 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10257839200, jitterRate=-0.044664278626441956}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:41,550 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:41,550 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:41,550 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:41,551 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:41,551 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 05:14:41,551 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 05:14:41,552 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 05:14:41,554 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 05:14:41,704 DEBUG [jenkins-hbase4:42797] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 05:14:41,705 DEBUG [jenkins-hbase4:42797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:41,705 DEBUG [jenkins-hbase4:42797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:41,705 DEBUG [jenkins-hbase4:42797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:41,705 DEBUG [jenkins-hbase4:42797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:41,705 DEBUG [jenkins-hbase4:42797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 
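The split-policy toString above shows desiredMaxFileSize=10257839200 with jitterRate=-0.0446..., which looks like the default 10 GB hbase.hregion.max.filesize scaled by that jitter. Treating the scaling as base * (1 + jitterRate) is an assumption about ConstantSizeRegionSplitPolicy, but it reproduces the logged value to within rounding:

    // Assumption: desiredMaxFileSize = default max file size * (1 + jitterRate).
    public final class SplitJitterSketch {
      public static void main(String[] args) {
        long baseMaxFileSize = 10L * 1024 * 1024 * 1024;    // 10737418240 (10 GB default)
        double jitterRate = -0.044664278626441956;          // value from the log
        long desired = (long) (baseMaxFileSize * (1 + jitterRate));
        System.out.println(desired);                        // ~10257839200, as logged
      }
    }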
2023-07-21 05:14:41,706 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42737,1689916480704, state=OPENING 2023-07-21 05:14:41,707 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 05:14:41,709 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:41,709 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:41,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42737,1689916480704}] 2023-07-21 05:14:41,866 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:41,867 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:41,868 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:41,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 05:14:41,873 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:41,875 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42737%2C1689916480704.meta, suffix=.meta, logDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,42737,1689916480704, archiveDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs, maxLogs=32 2023-07-21 05:14:41,891 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK] 2023-07-21 05:14:41,891 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK] 2023-07-21 05:14:41,891 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK] 2023-07-21 05:14:41,894 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,42737,1689916480704/jenkins-hbase4.apache.org%2C42737%2C1689916480704.meta.1689916481876.meta 2023-07-21 05:14:41,894 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38827,DS-4190f524-e513-4e34-96e6-0bfb064620ec,DISK], DatanodeInfoWithStorage[127.0.0.1:45331,DS-2e84e184-abf4-437d-be8d-c1982675d7bb,DISK], DatanodeInfoWithStorage[127.0.0.1:33595,DS-fa7c24f2-afc7-4c32-82f0-f5c9db20c685,DISK]] 2023-07-21 05:14:41,894 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 05:14:41,895 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 05:14:41,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 05:14:41,897 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:41,898 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/info 2023-07-21 05:14:41,898 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/info 2023-07-21 05:14:41,898 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:41,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:41,900 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:41,900 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:41,900 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:41,901 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,901 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:41,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/table 2023-07-21 05:14:41,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/table 2023-07-21 05:14:41,902 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:41,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:41,904 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740 2023-07-21 05:14:41,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740 2023-07-21 05:14:41,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 05:14:41,909 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:41,910 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10542297760, jitterRate=-0.0181720107793808}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:41,910 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:41,911 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689916481866 2023-07-21 05:14:41,916 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 05:14:41,916 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 05:14:41,917 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42737,1689916480704, state=OPEN 2023-07-21 05:14:41,918 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:41,918 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:41,920 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 05:14:41,920 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42737,1689916480704 in 209 msec 2023-07-21 05:14:41,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 05:14:41,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-07-21 05:14:41,923 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure 
table=hbase:meta in 886 msec 2023-07-21 05:14:41,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689916481923, completionTime=-1 2023-07-21 05:14:41,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 05:14:41,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-21 05:14:41,926 DEBUG [hconnection-0x11dcc035-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:41,928 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51930, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:41,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 05:14:41,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689916541929 2023-07-21 05:14:41,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689916601929 2023-07-21 05:14:41,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42797,1689916480560-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42797,1689916480560-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42797,1689916480560-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42797, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:41,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
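The master line "Finished waiting on RegionServer count=3" can be confirmed from a client with the public Admin API. This is an outside-in check, not the master's internal ServerManager path, and it assumes the test's client configuration is available on the classpath:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Illustrative only: count live region servers (expected 3 in this mini-cluster).
    public final class LiveServerCountSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          int live = admin.getClusterMetrics().getLiveServerMetrics().size();
          System.out.println("live region servers: " + live);
        }
      }
    }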
2023-07-21 05:14:41,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:41,938 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 05:14:41,939 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 05:14:41,941 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:41,942 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:41,944 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:41,944 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831 empty. 2023-07-21 05:14:41,945 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:41,945 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 05:14:41,957 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:41,958 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7aed03d9c062ca1354b3d31152795831, NAME => 'hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp 2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7aed03d9c062ca1354b3d31152795831, disabling compactions & flushes 2023-07-21 05:14:41,967 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 
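The "create 'hbase:namespace'" descriptor above maps directly onto the public builder API. A sketch of an equivalent descriptor; the master builds this internally during namespace-table bootstrap, so this only illustrates the attributes, not the actual code path:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Mirrors the logged attributes: 'info' family, BLOOMFILTER=ROW, IN_MEMORY, 10 versions, 8K blocks.
    public final class NamespaceDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor namespaceLike = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase:namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        System.out.println(namespaceLike);
      }
    }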
2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. after waiting 0 ms 2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:41,967 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:41,967 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7aed03d9c062ca1354b3d31152795831: 2023-07-21 05:14:41,970 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:41,970 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916481970"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916481970"}]},"ts":"1689916481970"} 2023-07-21 05:14:41,973 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:41,974 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:41,974 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916481974"}]},"ts":"1689916481974"} 2023-07-21 05:14:41,975 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 05:14:41,978 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:41,978 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:41,978 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:41,978 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:41,978 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:41,978 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7aed03d9c062ca1354b3d31152795831, ASSIGN}] 2023-07-21 05:14:41,980 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7aed03d9c062ca1354b3d31152795831, ASSIGN 2023-07-21 05:14:41,981 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7aed03d9c062ca1354b3d31152795831, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40459,1689916480762; forceNewPlan=false, retain=false 2023-07-21 05:14:42,131 INFO [jenkins-hbase4:42797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 05:14:42,133 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7aed03d9c062ca1354b3d31152795831, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:42,133 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916482132"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916482132"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916482132"}]},"ts":"1689916482132"} 2023-07-21 05:14:42,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 7aed03d9c062ca1354b3d31152795831, server=jenkins-hbase4.apache.org,40459,1689916480762}] 2023-07-21 05:14:42,152 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:42,153 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 05:14:42,155 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:42,156 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:42,157 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,158 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2 empty. 
2023-07-21 05:14:42,158 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,158 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 05:14:42,171 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:42,172 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7e1bc2391878d842697e86d0ef8d60f2, NAME => 'hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp 2023-07-21 05:14:42,181 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,181 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 7e1bc2391878d842697e86d0ef8d60f2, disabling compactions & flushes 2023-07-21 05:14:42,181 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,181 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,181 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. after waiting 0 ms 2023-07-21 05:14:42,182 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,182 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 
2023-07-21 05:14:42,182 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 7e1bc2391878d842697e86d0ef8d60f2: 2023-07-21 05:14:42,184 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:42,185 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916482185"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916482185"}]},"ts":"1689916482185"} 2023-07-21 05:14:42,186 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:42,187 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:42,187 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482187"}]},"ts":"1689916482187"} 2023-07-21 05:14:42,188 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 05:14:42,192 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:42,193 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:42,193 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:42,193 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:42,193 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:42,193 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7e1bc2391878d842697e86d0ef8d60f2, ASSIGN}] 2023-07-21 05:14:42,194 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7e1bc2391878d842697e86d0ef8d60f2, ASSIGN 2023-07-21 05:14:42,194 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7e1bc2391878d842697e86d0ef8d60f2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42737,1689916480704; forceNewPlan=false, retain=false 2023-07-21 05:14:42,287 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:42,287 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:42,289 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:42,294 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:42,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7aed03d9c062ca1354b3d31152795831, NAME => 'hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:42,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,296 INFO [StoreOpener-7aed03d9c062ca1354b3d31152795831-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,298 DEBUG [StoreOpener-7aed03d9c062ca1354b3d31152795831-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/info 2023-07-21 05:14:42,298 DEBUG [StoreOpener-7aed03d9c062ca1354b3d31152795831-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/info 2023-07-21 05:14:42,298 INFO [StoreOpener-7aed03d9c062ca1354b3d31152795831-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7aed03d9c062ca1354b3d31152795831 columnFamilyName info 2023-07-21 05:14:42,299 INFO [StoreOpener-7aed03d9c062ca1354b3d31152795831-1] regionserver.HStore(310): Store=7aed03d9c062ca1354b3d31152795831/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:42,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:42,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:42,306 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7aed03d9c062ca1354b3d31152795831; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12077543360, jitterRate=0.12480887770652771}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:42,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7aed03d9c062ca1354b3d31152795831: 2023-07-21 05:14:42,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831., pid=6, masterSystemTime=1689916482287 2023-07-21 05:14:42,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:42,311 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 
2023-07-21 05:14:42,311 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7aed03d9c062ca1354b3d31152795831, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:42,312 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916482311"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916482311"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916482311"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916482311"}]},"ts":"1689916482311"} 2023-07-21 05:14:42,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 05:14:42,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 7aed03d9c062ca1354b3d31152795831, server=jenkins-hbase4.apache.org,40459,1689916480762 in 179 msec 2023-07-21 05:14:42,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 05:14:42,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7aed03d9c062ca1354b3d31152795831, ASSIGN in 337 msec 2023-07-21 05:14:42,317 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:42,317 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482317"}]},"ts":"1689916482317"} 2023-07-21 05:14:42,318 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 05:14:42,321 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:42,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 384 msec 2023-07-21 05:14:42,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 05:14:42,340 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:42,340 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:42,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:42,345 INFO [jenkins-hbase4:42797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:42,346 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=7e1bc2391878d842697e86d0ef8d60f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:42,346 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916482346"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916482346"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916482346"}]},"ts":"1689916482346"} 2023-07-21 05:14:42,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 7e1bc2391878d842697e86d0ef8d60f2, server=jenkins-hbase4.apache.org,42737,1689916480704}] 2023-07-21 05:14:42,348 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44960, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:42,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 05:14:42,362 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:42,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-21 05:14:42,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 05:14:42,379 DEBUG [PEWorker-5] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-21 05:14:42,379 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 05:14:42,508 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7e1bc2391878d842697e86d0ef8d60f2, NAME => 'hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. service=MultiRowMutationService 2023-07-21 05:14:42,509 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,510 INFO [StoreOpener-7e1bc2391878d842697e86d0ef8d60f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,512 DEBUG [StoreOpener-7e1bc2391878d842697e86d0ef8d60f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/m 2023-07-21 05:14:42,512 DEBUG [StoreOpener-7e1bc2391878d842697e86d0ef8d60f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/m 2023-07-21 05:14:42,512 INFO [StoreOpener-7e1bc2391878d842697e86d0ef8d60f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7e1bc2391878d842697e86d0ef8d60f2 columnFamilyName m 2023-07-21 05:14:42,513 INFO [StoreOpener-7e1bc2391878d842697e86d0ef8d60f2-1] regionserver.HStore(310): Store=7e1bc2391878d842697e86d0ef8d60f2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:42,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:42,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:42,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7e1bc2391878d842697e86d0ef8d60f2; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@44918bbf, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:42,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7e1bc2391878d842697e86d0ef8d60f2: 2023-07-21 05:14:42,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2., pid=9, masterSystemTime=1689916482505 2023-07-21 05:14:42,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,522 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:42,522 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=7e1bc2391878d842697e86d0ef8d60f2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:42,522 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916482522"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916482522"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916482522"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916482522"}]},"ts":"1689916482522"} 2023-07-21 05:14:42,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-21 05:14:42,525 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 7e1bc2391878d842697e86d0ef8d60f2, server=jenkins-hbase4.apache.org,42737,1689916480704 in 176 msec 2023-07-21 05:14:42,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-21 05:14:42,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7e1bc2391878d842697e86d0ef8d60f2, ASSIGN in 332 msec 2023-07-21 05:14:42,533 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:42,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 159 msec 2023-07-21 05:14:42,536 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:42,536 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482536"}]},"ts":"1689916482536"} 2023-07-21 05:14:42,537 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 05:14:42,541 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:42,542 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 05:14:42,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 389 msec 2023-07-21 05:14:42,545 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 05:14:42,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.664sec 2023-07-21 05:14:42,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-21 05:14:42,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:42,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 05:14:42,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 05:14:42,549 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:42,549 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:42,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-21 05:14:42,551 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/quota/58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,552 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/quota/58ec4c74e26768cce83095575280aeae empty. 2023-07-21 05:14:42,552 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/quota/58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,552 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 05:14:42,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 05:14:42,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 05:14:42,560 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 05:14:42,560 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 05:14:42,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:42,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:42,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 05:14:42,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 05:14:42,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42797,1689916480560-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 05:14:42,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42797,1689916480560-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 05:14:42,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 05:14:42,564 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:42,565 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:42,566 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:42,567 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42797,1689916480560] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 05:14:42,576 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:42,577 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 58ec4c74e26768cce83095575280aeae, NAME => 'hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp 2023-07-21 05:14:42,577 DEBUG [Listener at localhost/40271] zookeeper.ReadOnlyZKClient(139): Connect 0x3cd6d70d to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:42,590 DEBUG [Listener at localhost/40271] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ac28cc3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:42,597 DEBUG [hconnection-0x46fa0971-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:42,600 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51946, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:42,602 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:42,602 INFO [Listener at localhost/40271] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:42,606 DEBUG [Listener at localhost/40271] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 58ec4c74e26768cce83095575280aeae, disabling compactions & flushes 2023-07-21 05:14:42,611 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. after waiting 0 ms 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,611 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,611 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 58ec4c74e26768cce83095575280aeae: 2023-07-21 05:14:42,612 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37584, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 05:14:42,615 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:42,616 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 05:14:42,616 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:42,616 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689916482616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916482616"}]},"ts":"1689916482616"} 2023-07-21 05:14:42,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 05:14:42,617 DEBUG [Listener at localhost/40271] zookeeper.ReadOnlyZKClient(139): Connect 0x0900ac95 to 127.0.0.1:60035 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:42,618 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 05:14:42,622 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:42,623 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482622"}]},"ts":"1689916482622"} 2023-07-21 05:14:42,624 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 05:14:42,624 DEBUG [Listener at localhost/40271] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fa78d45, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:42,625 INFO [Listener at localhost/40271] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60035 2023-07-21 05:14:42,629 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:42,629 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:42,629 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:42,629 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:42,629 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:42,629 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:42,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101864d9f18000a connected 2023-07-21 05:14:42,630 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=58ec4c74e26768cce83095575280aeae, ASSIGN}] 2023-07-21 05:14:42,631 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=58ec4c74e26768cce83095575280aeae, ASSIGN 2023-07-21 05:14:42,632 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=58ec4c74e26768cce83095575280aeae, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40459,1689916480762; forceNewPlan=false, retain=false 2023-07-21 05:14:42,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 05:14:42,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 05:14:42,644 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 05:14:42,648 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:42,652 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-21 05:14:42,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 05:14:42,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:42,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 05:14:42,754 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:42,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-21 05:14:42,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:42,756 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:42,756 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:42,758 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:42,760 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:42,760 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b empty. 
2023-07-21 05:14:42,761 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:42,761 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 05:14:42,773 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:42,774 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1520cab57722302223d8ae6ca596944b, NAME => 'np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 1520cab57722302223d8ae6ca596944b, disabling compactions & flushes 2023-07-21 05:14:42,782 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. after waiting 0 ms 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:42,782 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:42,782 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 1520cab57722302223d8ae6ca596944b: 2023-07-21 05:14:42,782 INFO [jenkins-hbase4:42797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:42,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=58ec4c74e26768cce83095575280aeae, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:42,784 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689916482783"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916482783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916482783"}]},"ts":"1689916482783"} 2023-07-21 05:14:42,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 58ec4c74e26768cce83095575280aeae, server=jenkins-hbase4.apache.org,40459,1689916480762}] 2023-07-21 05:14:42,785 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:42,786 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916482786"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916482786"}]},"ts":"1689916482786"} 2023-07-21 05:14:42,787 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:42,788 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:42,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482788"}]},"ts":"1689916482788"} 2023-07-21 05:14:42,789 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 05:14:42,792 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:42,792 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:42,792 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:42,792 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:42,792 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:42,792 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, ASSIGN}] 2023-07-21 05:14:42,793 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, ASSIGN 2023-07-21 05:14:42,793 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=1520cab57722302223d8ae6ca596944b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42737,1689916480704; forceNewPlan=false, retain=false 2023-07-21 05:14:42,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:42,940 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58ec4c74e26768cce83095575280aeae, NAME => 'hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:42,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:42,941 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,941 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,942 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,943 DEBUG [StoreOpener-58ec4c74e26768cce83095575280aeae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/q 2023-07-21 05:14:42,943 DEBUG [StoreOpener-58ec4c74e26768cce83095575280aeae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/q 2023-07-21 05:14:42,943 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58ec4c74e26768cce83095575280aeae columnFamilyName q 2023-07-21 05:14:42,944 INFO [jenkins-hbase4:42797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:42,945 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] regionserver.HStore(310): Store=58ec4c74e26768cce83095575280aeae/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:42,945 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1520cab57722302223d8ae6ca596944b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:42,945 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,945 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916482945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916482945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916482945"}]},"ts":"1689916482945"} 2023-07-21 05:14:42,946 DEBUG [StoreOpener-58ec4c74e26768cce83095575280aeae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/u 2023-07-21 05:14:42,946 DEBUG [StoreOpener-58ec4c74e26768cce83095575280aeae-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/u 2023-07-21 05:14:42,947 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 1520cab57722302223d8ae6ca596944b, server=jenkins-hbase4.apache.org,42737,1689916480704}] 2023-07-21 05:14:42,947 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58ec4c74e26768cce83095575280aeae columnFamilyName u 2023-07-21 05:14:42,947 INFO [StoreOpener-58ec4c74e26768cce83095575280aeae-1] regionserver.HStore(310): Store=58ec4c74e26768cce83095575280aeae/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:42,948 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,949 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,950 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-21 05:14:42,951 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:42,953 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:42,954 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 58ec4c74e26768cce83095575280aeae; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11106516000, jitterRate=0.0343749076128006}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 05:14:42,954 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58ec4c74e26768cce83095575280aeae: 2023-07-21 05:14:42,954 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae., pid=16, masterSystemTime=1689916482937 2023-07-21 05:14:42,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:42,956 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 
2023-07-21 05:14:42,956 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=58ec4c74e26768cce83095575280aeae, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:42,956 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689916482956"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916482956"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916482956"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916482956"}]},"ts":"1689916482956"} 2023-07-21 05:14:42,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-21 05:14:42,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 58ec4c74e26768cce83095575280aeae, server=jenkins-hbase4.apache.org,40459,1689916480762 in 172 msec 2023-07-21 05:14:42,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 05:14:42,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=58ec4c74e26768cce83095575280aeae, ASSIGN in 329 msec 2023-07-21 05:14:42,961 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:42,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916482961"}]},"ts":"1689916482961"} 2023-07-21 05:14:42,962 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 05:14:42,965 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:42,967 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 420 msec 2023-07-21 05:14:43,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:43,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 
2023-07-21 05:14:43,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1520cab57722302223d8ae6ca596944b, NAME => 'np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:43,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:43,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,103 INFO [StoreOpener-1520cab57722302223d8ae6ca596944b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,105 DEBUG [StoreOpener-1520cab57722302223d8ae6ca596944b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b/fam1 2023-07-21 05:14:43,105 DEBUG [StoreOpener-1520cab57722302223d8ae6ca596944b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b/fam1 2023-07-21 05:14:43,105 INFO [StoreOpener-1520cab57722302223d8ae6ca596944b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1520cab57722302223d8ae6ca596944b columnFamilyName fam1 2023-07-21 05:14:43,106 INFO [StoreOpener-1520cab57722302223d8ae6ca596944b-1] regionserver.HStore(310): Store=1520cab57722302223d8ae6ca596944b/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:43,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:43,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1520cab57722302223d8ae6ca596944b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9516468000, jitterRate=-0.11370985209941864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:43,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1520cab57722302223d8ae6ca596944b: 2023-07-21 05:14:43,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b., pid=18, masterSystemTime=1689916483098 2023-07-21 05:14:43,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,115 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1520cab57722302223d8ae6ca596944b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:43,115 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916483115"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916483115"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916483115"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916483115"}]},"ts":"1689916483115"} 2023-07-21 05:14:43,118 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 05:14:43,118 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 1520cab57722302223d8ae6ca596944b, server=jenkins-hbase4.apache.org,42737,1689916480704 in 169 msec 2023-07-21 05:14:43,119 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-21 05:14:43,119 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, ASSIGN in 326 msec 2023-07-21 05:14:43,120 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:43,120 DEBUG [PEWorker-1] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916483120"}]},"ts":"1689916483120"} 2023-07-21 05:14:43,121 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 05:14:43,123 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:43,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 373 msec 2023-07-21 05:14:43,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-21 05:14:43,358 INFO [Listener at localhost/40271] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-21 05:14:43,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:43,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 05:14:43,362 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:43,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 05:14:43,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 05:14:43,381 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=20 msec 2023-07-21 05:14:43,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 05:14:43,467 INFO [Listener at localhost/40271] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-21 05:14:43,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:43,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:43,470 INFO [Listener at localhost/40271] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 05:14:43,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-21 05:14:43,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 05:14:43,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 05:14:43,475 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916483475"}]},"ts":"1689916483475"} 2023-07-21 05:14:43,477 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 05:14:43,480 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 05:14:43,481 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, UNASSIGN}] 2023-07-21 05:14:43,481 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, UNASSIGN 2023-07-21 05:14:43,482 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=1520cab57722302223d8ae6ca596944b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:43,482 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916483482"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916483482"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916483482"}]},"ts":"1689916483482"} 2023-07-21 05:14:43,484 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 1520cab57722302223d8ae6ca596944b, server=jenkins-hbase4.apache.org,42737,1689916480704}] 2023-07-21 05:14:43,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 05:14:43,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1520cab57722302223d8ae6ca596944b, disabling compactions & flushes 2023-07-21 05:14:43,639 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. after waiting 0 ms 2023-07-21 05:14:43,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/np1/table1/1520cab57722302223d8ae6ca596944b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:43,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b. 2023-07-21 05:14:43,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1520cab57722302223d8ae6ca596944b: 2023-07-21 05:14:43,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,647 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=1520cab57722302223d8ae6ca596944b, regionState=CLOSED 2023-07-21 05:14:43,647 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916483647"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916483647"}]},"ts":"1689916483647"} 2023-07-21 05:14:43,657 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 05:14:43,657 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 1520cab57722302223d8ae6ca596944b, server=jenkins-hbase4.apache.org,42737,1689916480704 in 171 msec 2023-07-21 05:14:43,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 05:14:43,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=1520cab57722302223d8ae6ca596944b, UNASSIGN in 176 msec 2023-07-21 05:14:43,660 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916483660"}]},"ts":"1689916483660"} 2023-07-21 05:14:43,661 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 05:14:43,662 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 05:14:43,664 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 192 msec 2023-07-21 05:14:43,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 05:14:43,777 INFO [Listener at localhost/40271] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 05:14:43,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-21 05:14:43,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,780 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 05:14:43,781 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:43,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:43,784 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,786 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b/fam1, FileablePath, hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b/recovered.edits] 2023-07-21 05:14:43,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 05:14:43,791 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b/recovered.edits/4.seqid to hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/archive/data/np1/table1/1520cab57722302223d8ae6ca596944b/recovered.edits/4.seqid 2023-07-21 05:14:43,792 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/.tmp/data/np1/table1/1520cab57722302223d8ae6ca596944b 2023-07-21 05:14:43,792 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 05:14:43,794 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,795 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-21 05:14:43,797 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-21 05:14:43,798 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,798 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 05:14:43,798 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916483798"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:43,799 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 05:14:43,800 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1520cab57722302223d8ae6ca596944b, NAME => 'np1:table1,,1689916482750.1520cab57722302223d8ae6ca596944b.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 05:14:43,800 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-21 05:14:43,800 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916483800"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:43,801 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 05:14:43,804 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 05:14:43,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-21 05:14:43,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 05:14:43,888 INFO [Listener at localhost/40271] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 05:14:43,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-21 05:14:43,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,902 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,904 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,906 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 05:14:43,908 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 05:14:43,908 DEBUG [Listener at 
localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:43,908 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,910 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 05:14:43,911 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-21 05:14:44,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 05:14:44,008 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 05:14:44,009 INFO [Listener at localhost/40271] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3cd6d70d to 127.0.0.1:60035 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] util.JVMClusterUtil(257): Found active master hash=526518317, stopped=false 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:44,009 DEBUG [Listener at localhost/40271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 05:14:44,009 INFO [Listener at localhost/40271] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:44,012 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:44,012 INFO [Listener at localhost/40271] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 05:14:44,012 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:44,012 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:44,012 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:44,012 DEBUG 
[Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:44,014 DEBUG [Listener at localhost/40271] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4818754b to 127.0.0.1:60035 2023-07-21 05:14:44,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:44,014 DEBUG [Listener at localhost/40271] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:44,014 INFO [Listener at localhost/40271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42737,1689916480704' ***** 2023-07-21 05:14:44,014 INFO [Listener at localhost/40271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:44,014 INFO [Listener at localhost/40271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40459,1689916480762' ***** 2023-07-21 05:14:44,014 INFO [Listener at localhost/40271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:44,014 INFO [Listener at localhost/40271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41649,1689916480817' ***** 2023-07-21 05:14:44,015 INFO [Listener at localhost/40271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:44,015 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:44,015 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:44,015 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:44,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:44,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:44,025 INFO [RS:1;jenkins-hbase4:40459] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5574719f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:44,026 INFO [RS:1;jenkins-hbase4:40459] server.AbstractConnector(383): Stopped ServerConnector@3fecd666{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:44,026 INFO [RS:1;jenkins-hbase4:40459] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:44,027 INFO [RS:1;jenkins-hbase4:40459] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@16387d57{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:44,027 INFO [RS:0;jenkins-hbase4:42737] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@35d76b04{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:44,027 INFO [RS:2;jenkins-hbase4:41649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e6aa2b0{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:44,030 INFO [RS:1;jenkins-hbase4:40459] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15b2670b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:44,030 INFO [RS:0;jenkins-hbase4:42737] server.AbstractConnector(383): Stopped ServerConnector@57e9a200{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:44,030 INFO [RS:2;jenkins-hbase4:41649] server.AbstractConnector(383): Stopped ServerConnector@7929487c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:44,030 INFO [RS:0;jenkins-hbase4:42737] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:44,030 INFO [RS:0;jenkins-hbase4:42737] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a647aaf{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:44,030 INFO [RS:0;jenkins-hbase4:42737] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@636fde24{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:44,031 INFO [RS:1;jenkins-hbase4:40459] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:44,030 INFO [RS:2;jenkins-hbase4:41649] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:44,031 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:44,031 INFO [RS:1;jenkins-hbase4:40459] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:44,032 INFO [RS:1;jenkins-hbase4:40459] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-21 05:14:44,032 INFO [RS:2;jenkins-hbase4:41649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@44e67843{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:44,032 INFO [RS:0;jenkins-hbase4:42737] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:44,032 INFO [RS:2;jenkins-hbase4:41649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c0552ff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:44,032 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(3305): Received CLOSE for 58ec4c74e26768cce83095575280aeae 2023-07-21 05:14:44,032 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:44,032 INFO [RS:0;jenkins-hbase4:42737] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:44,032 INFO [RS:0;jenkins-hbase4:42737] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:44,032 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(3305): Received CLOSE for 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:44,032 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:44,032 DEBUG [RS:0;jenkins-hbase4:42737] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5dbe7c66 to 127.0.0.1:60035 2023-07-21 05:14:44,033 DEBUG [RS:0;jenkins-hbase4:42737] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,033 INFO [RS:0;jenkins-hbase4:42737] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:44,033 INFO [RS:0;jenkins-hbase4:42737] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:44,033 INFO [RS:0;jenkins-hbase4:42737] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:44,033 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 05:14:44,033 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(3305): Received CLOSE for 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:44,033 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:44,033 DEBUG [RS:1;jenkins-hbase4:40459] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08173bda to 127.0.0.1:60035 2023-07-21 05:14:44,033 DEBUG [RS:1;jenkins-hbase4:40459] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,033 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 05:14:44,033 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1478): Online Regions={58ec4c74e26768cce83095575280aeae=hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae., 7aed03d9c062ca1354b3d31152795831=hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831.} 2023-07-21 05:14:44,033 DEBUG [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1504): Waiting on 58ec4c74e26768cce83095575280aeae, 7aed03d9c062ca1354b3d31152795831 2023-07-21 05:14:44,033 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 05:14:44,033 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 7e1bc2391878d842697e86d0ef8d60f2=hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2.} 2023-07-21 05:14:44,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7e1bc2391878d842697e86d0ef8d60f2, disabling compactions & flushes 2023-07-21 05:14:44,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58ec4c74e26768cce83095575280aeae, disabling compactions & flushes 2023-07-21 05:14:44,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:44,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:44,036 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:44,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:44,034 DEBUG [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1504): Waiting on 1588230740, 7e1bc2391878d842697e86d0ef8d60f2 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:44,037 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:44,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 
after waiting 0 ms 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:44,037 INFO [RS:2;jenkins-hbase4:41649] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. after waiting 0 ms 2023-07-21 05:14:44,037 INFO [RS:2;jenkins-hbase4:41649] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:44,037 INFO [RS:2;jenkins-hbase4:41649] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:44,037 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:44,037 DEBUG [RS:2;jenkins-hbase4:41649] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x61f10f78 to 127.0.0.1:60035 2023-07-21 05:14:44,037 DEBUG [RS:2;jenkins-hbase4:41649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,037 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41649,1689916480817; all regions closed. 2023-07-21 05:14:44,037 DEBUG [RS:2;jenkins-hbase4:41649] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 05:14:44,037 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 
2023-07-21 05:14:44,037 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:44,038 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7e1bc2391878d842697e86d0ef8d60f2 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-21 05:14:44,037 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-21 05:14:44,054 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/quota/58ec4c74e26768cce83095575280aeae/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:44,059 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/WALs/jenkins-hbase4.apache.org,41649,1689916480817/jenkins-hbase4.apache.org%2C41649%2C1689916480817.1689916481294 not finished, retry = 0 2023-07-21 05:14:44,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:44,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58ec4c74e26768cce83095575280aeae: 2023-07-21 05:14:44,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689916482546.58ec4c74e26768cce83095575280aeae. 2023-07-21 05:14:44,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7aed03d9c062ca1354b3d31152795831, disabling compactions & flushes 2023-07-21 05:14:44,060 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:44,060 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:44,060 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. after waiting 0 ms 2023-07-21 05:14:44,060 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 
2023-07-21 05:14:44,060 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7aed03d9c062ca1354b3d31152795831 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 05:14:44,065 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,065 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,075 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/.tmp/m/f79ad147a6d346dda25bff2c83afa869 2023-07-21 05:14:44,082 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/info/162df28c75b14cc6800b8be370b1defb 2023-07-21 05:14:44,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/.tmp/m/f79ad147a6d346dda25bff2c83afa869 as hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/m/f79ad147a6d346dda25bff2c83afa869 2023-07-21 05:14:44,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/.tmp/info/5eba2cb8929c4c5e98e6526761dcff1f 2023-07-21 05:14:44,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 162df28c75b14cc6800b8be370b1defb 2023-07-21 05:14:44,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/m/f79ad147a6d346dda25bff2c83afa869, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 05:14:44,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5eba2cb8929c4c5e98e6526761dcff1f 2023-07-21 05:14:44,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 7e1bc2391878d842697e86d0ef8d60f2 in 60ms, sequenceid=7, compaction requested=false 2023-07-21 05:14:44,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 05:14:44,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/.tmp/info/5eba2cb8929c4c5e98e6526761dcff1f as 
hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/info/5eba2cb8929c4c5e98e6526761dcff1f 2023-07-21 05:14:44,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/rsgroup/7e1bc2391878d842697e86d0ef8d60f2/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 05:14:44,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:44,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:44,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7e1bc2391878d842697e86d0ef8d60f2: 2023-07-21 05:14:44,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689916482152.7e1bc2391878d842697e86d0ef8d60f2. 2023-07-21 05:14:44,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5eba2cb8929c4c5e98e6526761dcff1f 2023-07-21 05:14:44,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/info/5eba2cb8929c4c5e98e6526761dcff1f, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 05:14:44,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 7aed03d9c062ca1354b3d31152795831 in 50ms, sequenceid=8, compaction requested=false 2023-07-21 05:14:44,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 05:14:44,112 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/rep_barrier/d2f73a9d5846475090017fc9c921fe54 2023-07-21 05:14:44,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/namespace/7aed03d9c062ca1354b3d31152795831/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 05:14:44,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 2023-07-21 05:14:44,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7aed03d9c062ca1354b3d31152795831: 2023-07-21 05:14:44,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689916481937.7aed03d9c062ca1354b3d31152795831. 
2023-07-21 05:14:44,120 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d2f73a9d5846475090017fc9c921fe54 2023-07-21 05:14:44,134 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/table/6daec4b39200420f9c36d01b964f3161 2023-07-21 05:14:44,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6daec4b39200420f9c36d01b964f3161 2023-07-21 05:14:44,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/info/162df28c75b14cc6800b8be370b1defb as hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/info/162df28c75b14cc6800b8be370b1defb 2023-07-21 05:14:44,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 162df28c75b14cc6800b8be370b1defb 2023-07-21 05:14:44,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/info/162df28c75b14cc6800b8be370b1defb, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 05:14:44,148 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/rep_barrier/d2f73a9d5846475090017fc9c921fe54 as hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/rep_barrier/d2f73a9d5846475090017fc9c921fe54 2023-07-21 05:14:44,152 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 05:14:44,153 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 05:14:44,153 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d2f73a9d5846475090017fc9c921fe54 2023-07-21 05:14:44,153 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/rep_barrier/d2f73a9d5846475090017fc9c921fe54, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 05:14:44,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/.tmp/table/6daec4b39200420f9c36d01b964f3161 as hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/table/6daec4b39200420f9c36d01b964f3161 2023-07-21 05:14:44,159 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
6daec4b39200420f9c36d01b964f3161 2023-07-21 05:14:44,159 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/table/6daec4b39200420f9c36d01b964f3161, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 05:14:44,161 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 124ms, sequenceid=31, compaction requested=false 2023-07-21 05:14:44,161 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 05:14:44,161 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 05:14:44,162 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 05:14:44,163 DEBUG [RS:2;jenkins-hbase4:41649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs 2023-07-21 05:14:44,163 INFO [RS:2;jenkins-hbase4:41649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41649%2C1689916480817:(num 1689916481294) 2023-07-21 05:14:44,163 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 05:14:44,163 DEBUG [RS:2;jenkins-hbase4:41649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,163 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 05:14:44,165 INFO [RS:2;jenkins-hbase4:41649] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,168 INFO [RS:2;jenkins-hbase4:41649] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:44,169 INFO [RS:2;jenkins-hbase4:41649] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:44,169 INFO [RS:2;jenkins-hbase4:41649] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:44,169 INFO [RS:2;jenkins-hbase4:41649] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 05:14:44,169 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 05:14:44,170 INFO [RS:2;jenkins-hbase4:41649] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41649 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41649,1689916480817 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,174 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,174 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41649,1689916480817] 2023-07-21 05:14:44,175 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41649,1689916480817; numProcessing=1 2023-07-21 05:14:44,177 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41649,1689916480817 already deleted, retry=false 2023-07-21 05:14:44,177 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41649,1689916480817 expired; onlineServers=2 2023-07-21 05:14:44,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 05:14:44,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:44,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:44,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 
05:14:44,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:44,233 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40459,1689916480762; all regions closed. 2023-07-21 05:14:44,233 DEBUG [RS:1;jenkins-hbase4:40459] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 05:14:44,237 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42737,1689916480704; all regions closed. 2023-07-21 05:14:44,237 DEBUG [RS:0;jenkins-hbase4:42737] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 05:14:44,245 DEBUG [RS:1;jenkins-hbase4:40459] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs 2023-07-21 05:14:44,245 INFO [RS:1;jenkins-hbase4:40459] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40459%2C1689916480762:(num 1689916481297) 2023-07-21 05:14:44,246 DEBUG [RS:1;jenkins-hbase4:40459] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,246 INFO [RS:1;jenkins-hbase4:40459] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,246 INFO [RS:1;jenkins-hbase4:40459] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:44,246 DEBUG [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs 2023-07-21 05:14:44,246 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:44,246 INFO [RS:1;jenkins-hbase4:40459] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:44,246 INFO [RS:1;jenkins-hbase4:40459] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:44,246 INFO [RS:1;jenkins-hbase4:40459] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:44,246 INFO [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42737%2C1689916480704.meta:.meta(num 1689916481876) 2023-07-21 05:14:44,247 INFO [RS:1;jenkins-hbase4:40459] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40459 2023-07-21 05:14:44,252 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:44,252 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,252 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40459,1689916480762 2023-07-21 05:14:44,252 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40459,1689916480762] 2023-07-21 05:14:44,252 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40459,1689916480762; numProcessing=2 2023-07-21 05:14:44,254 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40459,1689916480762 already deleted, retry=false 2023-07-21 05:14:44,254 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40459,1689916480762 expired; onlineServers=1 2023-07-21 05:14:44,254 DEBUG [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/oldWALs 2023-07-21 05:14:44,254 INFO [RS:0;jenkins-hbase4:42737] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42737%2C1689916480704:(num 1689916481313) 2023-07-21 05:14:44,254 DEBUG [RS:0;jenkins-hbase4:42737] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,254 INFO [RS:0;jenkins-hbase4:42737] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:44,254 INFO [RS:0;jenkins-hbase4:42737] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:44,254 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 05:14:44,255 INFO [RS:0;jenkins-hbase4:42737] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42737 2023-07-21 05:14:44,258 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42737,1689916480704 2023-07-21 05:14:44,258 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:44,260 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42737,1689916480704] 2023-07-21 05:14:44,260 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42737,1689916480704; numProcessing=3 2023-07-21 05:14:44,261 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42737,1689916480704 already deleted, retry=false 2023-07-21 05:14:44,262 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42737,1689916480704 expired; onlineServers=0 2023-07-21 05:14:44,262 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42797,1689916480560' ***** 2023-07-21 05:14:44,262 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 05:14:44,262 DEBUG [M:0;jenkins-hbase4:42797] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b5d9f19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:44,262 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:44,263 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:44,264 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:44,264 INFO [M:0;jenkins-hbase4:42797] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@17183753{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-21 05:14:44,264 INFO [M:0;jenkins-hbase4:42797] server.AbstractConnector(383): Stopped ServerConnector@140a6448{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:44,264 INFO [M:0;jenkins-hbase4:42797] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:44,265 INFO [M:0;jenkins-hbase4:42797] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@668a6b88{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:44,265 INFO 
[M:0;jenkins-hbase4:42797] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@60e1aefd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:44,266 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:44,266 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42797,1689916480560 2023-07-21 05:14:44,266 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42797,1689916480560; all regions closed. 2023-07-21 05:14:44,266 DEBUG [M:0;jenkins-hbase4:42797] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:44,266 INFO [M:0;jenkins-hbase4:42797] master.HMaster(1491): Stopping master jetty server 2023-07-21 05:14:44,267 INFO [M:0;jenkins-hbase4:42797] server.AbstractConnector(383): Stopped ServerConnector@3bc8cd48{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:44,268 DEBUG [M:0;jenkins-hbase4:42797] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 05:14:44,268 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 05:14:44,268 DEBUG [M:0;jenkins-hbase4:42797] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 05:14:44,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916481077] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916481077,5,FailOnTimeoutGroup] 2023-07-21 05:14:44,268 INFO [M:0;jenkins-hbase4:42797] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 05:14:44,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916481077] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916481077,5,FailOnTimeoutGroup] 2023-07-21 05:14:44,268 INFO [M:0;jenkins-hbase4:42797] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 05:14:44,269 INFO [M:0;jenkins-hbase4:42797] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:44,269 DEBUG [M:0;jenkins-hbase4:42797] master.HMaster(1512): Stopping service threads 2023-07-21 05:14:44,269 INFO [M:0;jenkins-hbase4:42797] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 05:14:44,269 ERROR [M:0;jenkins-hbase4:42797] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 05:14:44,269 INFO [M:0;jenkins-hbase4:42797] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 05:14:44,269 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-21 05:14:44,270 DEBUG [M:0;jenkins-hbase4:42797] zookeeper.ZKUtil(398): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 05:14:44,270 WARN [M:0;jenkins-hbase4:42797] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 05:14:44,270 INFO [M:0;jenkins-hbase4:42797] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 05:14:44,271 INFO [M:0;jenkins-hbase4:42797] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 05:14:44,271 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 05:14:44,271 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:44,271 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:44,271 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 05:14:44,271 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:44,271 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.13 KB 2023-07-21 05:14:44,286 INFO [M:0;jenkins-hbase4:42797] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/029c8c0d09ea489d81e72cd820d7e46d 2023-07-21 05:14:44,292 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/029c8c0d09ea489d81e72cd820d7e46d as hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/029c8c0d09ea489d81e72cd820d7e46d 2023-07-21 05:14:44,298 INFO [M:0;jenkins-hbase4:42797] regionserver.HStore(1080): Added hdfs://localhost:37015/user/jenkins/test-data/b6340c68-6c8d-4819-f6fe-d8f4ba5283b9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/029c8c0d09ea489d81e72cd820d7e46d, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 05:14:44,299 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95225, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=194, compaction requested=false 2023-07-21 05:14:44,300 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 05:14:44,300 DEBUG [M:0;jenkins-hbase4:42797] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:44,305 INFO [M:0;jenkins-hbase4:42797] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 05:14:44,305 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:44,305 INFO [M:0;jenkins-hbase4:42797] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42797 2023-07-21 05:14:44,307 DEBUG [M:0;jenkins-hbase4:42797] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42797,1689916480560 already deleted, retry=false 2023-07-21 05:14:44,612 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,612 INFO [M:0;jenkins-hbase4:42797] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42797,1689916480560; zookeeper connection closed. 2023-07-21 05:14:44,612 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): master:42797-0x101864d9f180000, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,712 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,712 INFO [RS:0;jenkins-hbase4:42737] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42737,1689916480704; zookeeper connection closed. 2023-07-21 05:14:44,712 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:42737-0x101864d9f180001, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,714 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7fcea5a0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7fcea5a0 2023-07-21 05:14:44,813 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,813 INFO [RS:1;jenkins-hbase4:40459] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40459,1689916480762; zookeeper connection closed. 2023-07-21 05:14:44,813 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:40459-0x101864d9f180002, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,814 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@792461d0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@792461d0 2023-07-21 05:14:44,913 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,913 INFO [RS:2;jenkins-hbase4:41649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41649,1689916480817; zookeeper connection closed. 
2023-07-21 05:14:44,913 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): regionserver:41649-0x101864d9f180003, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:44,919 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d5cac58] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d5cac58 2023-07-21 05:14:44,919 INFO [Listener at localhost/40271] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 05:14:44,919 WARN [Listener at localhost/40271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:44,924 INFO [Listener at localhost/40271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:45,031 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:45,031 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-637425181-172.31.14.131-1689916479667 (Datanode Uuid 4663a5df-d10e-4a7e-90d9-aa5d7b13cf1a) service to localhost/127.0.0.1:37015 2023-07-21 05:14:45,032 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data5/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,032 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data6/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,034 WARN [Listener at localhost/40271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:45,039 INFO [Listener at localhost/40271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:45,143 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:45,143 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-637425181-172.31.14.131-1689916479667 (Datanode Uuid 2af665ec-186d-4352-a73f-4a965871ad0b) service to localhost/127.0.0.1:37015 2023-07-21 05:14:45,144 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data3/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,145 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data4/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,146 WARN [Listener at localhost/40271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 05:14:45,154 INFO [Listener at localhost/40271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:45,256 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 05:14:45,256 WARN [BP-637425181-172.31.14.131-1689916479667 heartbeating to localhost/127.0.0.1:37015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-637425181-172.31.14.131-1689916479667 (Datanode Uuid 5bedb4b4-0ac8-4728-8a89-4a5551b8d750) service to localhost/127.0.0.1:37015 2023-07-21 05:14:45,257 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data1/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,258 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/cluster_50581fd3-74e7-03b1-a21e-3a0135a2efb1/dfs/data/data2/current/BP-637425181-172.31.14.131-1689916479667] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 05:14:45,268 INFO [Listener at localhost/40271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 05:14:45,389 INFO [Listener at localhost/40271] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.log.dir so I do NOT create it in target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f49dcd16-455d-a33c-9b04-ceae32e9a882/hadoop.tmp.dir so I do NOT create it in target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81, deleteOnExit=true 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/test.cache.data in system properties and HBase conf 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir in system properties and HBase conf 2023-07-21 05:14:45,416 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 05:14:45,417 DEBUG [Listener at localhost/40271] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 05:14:45,417 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/nfs.dump.dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/java.io.tmpdir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 05:14:45,418 INFO [Listener at localhost/40271] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 05:14:45,422 WARN [Listener at localhost/40271] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 05:14:45,422 WARN [Listener at localhost/40271] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 05:14:45,487 DEBUG [Listener at localhost/40271-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101864d9f18000a, quorum=127.0.0.1:60035, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 05:14:45,488 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101864d9f18000a, quorum=127.0.0.1:60035, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 05:14:45,495 WARN [Listener at localhost/40271] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:45,497 INFO [Listener at localhost/40271] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:45,503 INFO [Listener at localhost/40271] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/java.io.tmpdir/Jetty_localhost_41179_hdfs____9behgg/webapp 2023-07-21 05:14:45,597 INFO [Listener at localhost/40271] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41179 2023-07-21 05:14:45,601 WARN [Listener at localhost/40271] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 05:14:45,601 WARN [Listener at localhost/40271] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 05:14:45,640 WARN [Listener at localhost/35849] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:45,656 WARN [Listener at localhost/35849] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:45,658 WARN [Listener 
at localhost/35849] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:45,660 INFO [Listener at localhost/35849] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:45,666 INFO [Listener at localhost/35849] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/java.io.tmpdir/Jetty_localhost_38623_datanode____.ti6nqf/webapp 2023-07-21 05:14:45,760 INFO [Listener at localhost/35849] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38623 2023-07-21 05:14:45,769 WARN [Listener at localhost/36809] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:45,789 WARN [Listener at localhost/36809] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:45,792 WARN [Listener at localhost/36809] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:45,793 INFO [Listener at localhost/36809] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:45,798 INFO [Listener at localhost/36809] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/java.io.tmpdir/Jetty_localhost_35113_datanode____ll4jq0/webapp 2023-07-21 05:14:45,891 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4e1c86fa2def731: Processing first storage report for DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8 from datanode 4108575c-63fe-404e-ae9f-65cacfd20645 2023-07-21 05:14:45,891 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4e1c86fa2def731: from storage DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8 node DatanodeRegistration(127.0.0.1:44725, datanodeUuid=4108575c-63fe-404e-ae9f-65cacfd20645, infoPort=36707, infoSecurePort=0, ipcPort=36809, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:45,891 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4e1c86fa2def731: Processing first storage report for DS-f30e8b06-c226-4f5b-a417-fc7de644f2f9 from datanode 4108575c-63fe-404e-ae9f-65cacfd20645 2023-07-21 05:14:45,891 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4e1c86fa2def731: from storage DS-f30e8b06-c226-4f5b-a417-fc7de644f2f9 node DatanodeRegistration(127.0.0.1:44725, datanodeUuid=4108575c-63fe-404e-ae9f-65cacfd20645, infoPort=36707, infoSecurePort=0, ipcPort=36809, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:45,908 INFO [Listener at localhost/36809] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35113 2023-07-21 05:14:45,915 WARN [Listener at localhost/35091] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-21 05:14:45,945 WARN [Listener at localhost/35091] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 05:14:45,947 WARN [Listener at localhost/35091] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 05:14:45,948 INFO [Listener at localhost/35091] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 05:14:45,956 INFO [Listener at localhost/35091] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/java.io.tmpdir/Jetty_localhost_34685_datanode____ffexeh/webapp 2023-07-21 05:14:46,039 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe14fb53310a83894: Processing first storage report for DS-860d6c57-d735-4cdf-9619-83aac57320ef from datanode 4efdc82f-9fce-4092-a75a-6319dec85e27 2023-07-21 05:14:46,039 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe14fb53310a83894: from storage DS-860d6c57-d735-4cdf-9619-83aac57320ef node DatanodeRegistration(127.0.0.1:36759, datanodeUuid=4efdc82f-9fce-4092-a75a-6319dec85e27, infoPort=37489, infoSecurePort=0, ipcPort=35091, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:46,039 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe14fb53310a83894: Processing first storage report for DS-ed689115-6ad5-4918-826d-c29bc70bccd9 from datanode 4efdc82f-9fce-4092-a75a-6319dec85e27 2023-07-21 05:14:46,039 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe14fb53310a83894: from storage DS-ed689115-6ad5-4918-826d-c29bc70bccd9 node DatanodeRegistration(127.0.0.1:36759, datanodeUuid=4efdc82f-9fce-4092-a75a-6319dec85e27, infoPort=37489, infoSecurePort=0, ipcPort=35091, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:46,065 INFO [Listener at localhost/35091] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34685 2023-07-21 05:14:46,072 WARN [Listener at localhost/37815] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 05:14:46,180 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9da84b885319bf59: Processing first storage report for DS-1b56bf0b-21b4-496a-8b90-ec8643561175 from datanode f15140a7-f1a3-48f1-9fe1-47efe4e38024 2023-07-21 05:14:46,180 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9da84b885319bf59: from storage DS-1b56bf0b-21b4-496a-8b90-ec8643561175 node DatanodeRegistration(127.0.0.1:36355, datanodeUuid=f15140a7-f1a3-48f1-9fe1-47efe4e38024, infoPort=44949, infoSecurePort=0, ipcPort=37815, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:46,180 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9da84b885319bf59: Processing first storage 
report for DS-c523281f-5be6-4de9-b023-440e87b4b31b from datanode f15140a7-f1a3-48f1-9fe1-47efe4e38024 2023-07-21 05:14:46,180 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9da84b885319bf59: from storage DS-c523281f-5be6-4de9-b023-440e87b4b31b node DatanodeRegistration(127.0.0.1:36355, datanodeUuid=f15140a7-f1a3-48f1-9fe1-47efe4e38024, infoPort=44949, infoSecurePort=0, ipcPort=37815, storageInfo=lv=-57;cid=testClusterID;nsid=1203794472;c=1689916485425), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 05:14:46,182 DEBUG [Listener at localhost/37815] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c 2023-07-21 05:14:46,184 INFO [Listener at localhost/37815] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/zookeeper_0, clientPort=53364, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 05:14:46,185 INFO [Listener at localhost/37815] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53364 2023-07-21 05:14:46,186 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,187 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,202 INFO [Listener at localhost/37815] util.FSUtils(471): Created version file at hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f with version=8 2023-07-21 05:14:46,202 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38517/user/jenkins/test-data/fbb1d392-f8e8-764f-6c84-f96f11c3edcf/hbase-staging 2023-07-21 05:14:46,203 DEBUG [Listener at localhost/37815] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 05:14:46,204 DEBUG [Listener at localhost/37815] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 05:14:46,204 DEBUG [Listener at localhost/37815] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 05:14:46,204 DEBUG [Listener at localhost/37815] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
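The records above show HBaseTestingUtility bringing up the mini DFS, starting a MiniZooKeeperCluster on client port 53364, and preparing a LocalHBaseCluster with randomized ports. A minimal sketch of driving the same startup from a test, assuming the branch-2.4 HBaseTestingUtility/StartMiniClusterOption API and an illustrative class name:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        // Test utility that owns the temporary rootdir, mini DFS, mini ZK and HBase daemons.
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirror the option reported earlier in the log: 1 master, 3 region servers, 3 data nodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // starts DFS, ZooKeeper and the local HBase cluster
        try {
          // ... run assertions against util.getConnection() / util.getAdmin() here ...
        } finally {
          util.shutdownMiniCluster();    // tears everything down and removes the test data directory
        }
      }
    }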
2023-07-21 05:14:46,205 INFO [Listener at localhost/37815] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:46,205 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,205 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,205 INFO [Listener at localhost/37815] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:46,205 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,206 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:46,206 INFO [Listener at localhost/37815] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:46,207 INFO [Listener at localhost/37815] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41465 2023-07-21 05:14:46,208 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,209 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,209 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41465 connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:46,217 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:414650x0, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:46,218 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41465-0x101864db52b0000 connected 2023-07-21 05:14:46,231 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:46,231 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:46,231 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:46,233 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41465 2023-07-21 05:14:46,234 DEBUG [Listener at localhost/37815] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41465 2023-07-21 05:14:46,234 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41465 2023-07-21 05:14:46,234 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41465 2023-07-21 05:14:46,235 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41465 2023-07-21 05:14:46,236 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:46,236 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:46,236 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:46,237 INFO [Listener at localhost/37815] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 05:14:46,237 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:46,237 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:46,237 INFO [Listener at localhost/37815] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
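The "Set watcher on znode that does not yet exist" records above come from ZKUtil registering watches on /hbase/master, /hbase/running and /hbase/acl before those znodes are created; under the hood this is the ZooKeeper exists() call, which returns null for a missing znode but still arms a one-shot watch. A small sketch of the same idea using the plain Apache ZooKeeper client (not the internal ZKUtil/ZKWatcher classes), reusing the ensemble address from the log:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ExistsWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        Watcher watcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // Fired when /hbase/master changes, e.g. NodeCreated once a master registers itself.
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
              created.countDown();
            }
          }
        };
        // 127.0.0.1:53364 is the client port reported by MiniZooKeeperCluster above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:53364", 30000, watcher);
        // exists() returns null for a missing znode but still registers a watch on it.
        Stat stat = zk.exists("/hbase/master", watcher);
        if (stat == null) {
          created.await();               // block until the znode is created
        }
        zk.close();
      }
    }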
2023-07-21 05:14:46,238 INFO [Listener at localhost/37815] http.HttpServer(1146): Jetty bound to port 44723 2023-07-21 05:14:46,238 INFO [Listener at localhost/37815] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:46,242 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,242 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@26442371{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:46,242 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,242 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66ba58da{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:46,248 INFO [Listener at localhost/37815] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:46,248 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:46,248 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:46,249 INFO [Listener at localhost/37815] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 05:14:46,250 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,250 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7582803a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-21 05:14:46,251 INFO [Listener at localhost/37815] server.AbstractConnector(333): Started ServerConnector@17bb529f{HTTP/1.1, (http/1.1)}{0.0.0.0:44723} 2023-07-21 05:14:46,252 INFO [Listener at localhost/37815] server.Server(415): Started @43374ms 2023-07-21 05:14:46,252 INFO [Listener at localhost/37815] master.HMaster(444): hbase.rootdir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f, hbase.cluster.distributed=false 2023-07-21 05:14:46,266 INFO [Listener at localhost/37815] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:46,266 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,266 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,266 INFO [Listener at localhost/37815] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 
05:14:46,266 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,266 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:46,267 INFO [Listener at localhost/37815] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:46,267 INFO [Listener at localhost/37815] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37839 2023-07-21 05:14:46,268 INFO [Listener at localhost/37815] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:46,269 DEBUG [Listener at localhost/37815] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:46,269 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,270 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,271 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37839 connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:46,278 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:378390x0, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:46,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37839-0x101864db52b0001 connected 2023-07-21 05:14:46,279 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:46,279 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:46,280 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:46,280 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37839 2023-07-21 05:14:46,281 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37839 2023-07-21 05:14:46,282 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37839 2023-07-21 05:14:46,285 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37839 2023-07-21 05:14:46,285 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37839 2023-07-21 05:14:46,286 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:46,287 INFO [Listener at localhost/37815] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 05:14:46,288 INFO [Listener at localhost/37815] http.HttpServer(1146): Jetty bound to port 35115 2023-07-21 05:14:46,288 INFO [Listener at localhost/37815] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:46,289 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,289 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ee8cb98{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:46,290 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,290 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f5a8a4b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:46,296 INFO [Listener at localhost/37815] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:46,297 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:46,297 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:46,297 INFO [Listener at localhost/37815] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 05:14:46,298 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,299 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3119830f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:46,300 INFO [Listener at localhost/37815] server.AbstractConnector(333): Started ServerConnector@5a351aef{HTTP/1.1, (http/1.1)}{0.0.0.0:35115} 2023-07-21 05:14:46,300 INFO [Listener at localhost/37815] server.Server(415): Started @43423ms 2023-07-21 05:14:46,312 INFO [Listener at localhost/37815] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:46,313 INFO [Listener at localhost/37815] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:46,315 INFO [Listener at localhost/37815] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36711 2023-07-21 05:14:46,315 INFO [Listener at localhost/37815] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:46,316 DEBUG [Listener at localhost/37815] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:46,316 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,317 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,318 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36711 connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:46,321 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:367110x0, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:46,323 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36711-0x101864db52b0002 connected 2023-07-21 05:14:46,323 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): 
regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:46,323 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:46,324 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:46,326 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-21 05:14:46,327 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36711 2023-07-21 05:14:46,329 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36711 2023-07-21 05:14:46,329 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-21 05:14:46,329 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36711 2023-07-21 05:14:46,331 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:46,331 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:46,331 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:46,331 INFO [Listener at localhost/37815] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:46,332 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:46,332 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:46,332 INFO [Listener at localhost/37815] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
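The repeated region-server startup records (server-side connection retries, RPC call queues, BlockCache allocation, info-server setup) are all driven by configuration that the test harness tunes down from production defaults. A short sketch of a few of the corresponding keys, assuming standard HBase configuration names; the values shown are illustrative and not necessarily what HBaseTestingUtility sets:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // RPC handlers on the default queue (the log shows handlerCount=3 in this test run).
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Fraction of the heap given to the on-heap BlockCache; 0.4 is the stock default.
        conf.setFloat("hfile.block.cache.size", 0.4f);
        // Let the info servers pick free ports, analogous to the mini cluster randomizing its ports.
        conf.setInt("hbase.master.info.port", 0);
        conf.setInt("hbase.regionserver.info.port", 0);
        return conf;
      }
    }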
2023-07-21 05:14:46,332 INFO [Listener at localhost/37815] http.HttpServer(1146): Jetty bound to port 46225 2023-07-21 05:14:46,332 INFO [Listener at localhost/37815] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:46,335 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,335 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2bc92e96{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:46,336 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,336 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@59d4cd27{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:46,340 INFO [Listener at localhost/37815] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:46,340 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:46,341 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:46,341 INFO [Listener at localhost/37815] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 05:14:46,342 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,343 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4e6eccad{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:46,345 INFO [Listener at localhost/37815] server.AbstractConnector(333): Started ServerConnector@28b0b21b{HTTP/1.1, (http/1.1)}{0.0.0.0:46225} 2023-07-21 05:14:46,345 INFO [Listener at localhost/37815] server.Server(415): Started @43468ms 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:46,356 INFO [Listener at localhost/37815] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:46,358 INFO [Listener at localhost/37815] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36007 2023-07-21 05:14:46,358 INFO [Listener at localhost/37815] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:46,359 DEBUG [Listener at localhost/37815] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:46,360 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,361 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,361 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36007 connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:46,364 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:360070x0, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:46,365 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:360070x0, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 05:14:46,366 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36007-0x101864db52b0003 connected 2023-07-21 05:14:46,366 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:46,366 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:46,369 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36007 2023-07-21 05:14:46,370 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36007 2023-07-21 05:14:46,370 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36007 2023-07-21 05:14:46,370 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36007 2023-07-21 05:14:46,370 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36007 2023-07-21 05:14:46,372 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:46,372 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:46,372 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:46,373 INFO [Listener at localhost/37815] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:46,373 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:46,373 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:46,373 INFO [Listener at localhost/37815] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 05:14:46,373 INFO [Listener at localhost/37815] http.HttpServer(1146): Jetty bound to port 37567 2023-07-21 05:14:46,374 INFO [Listener at localhost/37815] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:46,375 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,375 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ec14bc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:46,375 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,375 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@9f1cc1a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:46,380 INFO [Listener at localhost/37815] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:46,381 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:46,381 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:46,381 INFO [Listener at localhost/37815] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 05:14:46,382 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:46,383 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@373f217{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:46,384 INFO [Listener at localhost/37815] server.AbstractConnector(333): Started ServerConnector@46a2470b{HTTP/1.1, (http/1.1)}{0.0.0.0:37567} 2023-07-21 05:14:46,385 INFO [Listener at localhost/37815] server.Server(415): Started @43507ms 2023-07-21 05:14:46,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:46,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7a6873aa{HTTP/1.1, (http/1.1)}{0.0.0.0:39393} 2023-07-21 05:14:46,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43514ms 2023-07-21 05:14:46,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,393 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 05:14:46,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,400 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:46,400 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:46,400 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:46,400 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 05:14:46,401 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:46,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41465,1689916486204 from backup master directory 2023-07-21 
05:14:46,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:46,405 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,405 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 05:14:46,405 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:46,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/hbase.id with ID: c9ee95bd-ab56-4466-b217-9076a3b1615d 2023-07-21 05:14:46,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:46,434 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7dd65c46 to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:46,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e6f5963, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:46,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:46,448 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 05:14:46,448 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:46,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store-tmp 2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 05:14:46,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 05:14:46,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
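The master:store region above is built from a single 'proc' column family with the attributes listed (BLOOMFILTER => 'ROW', VERSIONS => '1', 64 KB blocks, block cache on, no compression or encoding). A sketch of expressing the same family attributes through the public 2.x builder API; the table name used here is hypothetical and only for illustration, since master:store is an internal region:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
      public static TableDescriptor build() {
        // Same headline attributes as the 'proc' family in the log: ROW bloom filter,
        // one version, 64 KB blocks, block cache enabled.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlocksize(65536)
            .setBlockCacheEnabled(true)
            .build();
        // "example:store" is a hypothetical table name for illustration only.
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example:store"))
            .setColumnFamily(proc)
            .build();
      }
    }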
2023-07-21 05:14:46,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:46,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/WALs/jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41465%2C1689916486204, suffix=, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/WALs/jenkins-hbase4.apache.org,41465,1689916486204, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/oldWALs, maxLogs=10 2023-07-21 05:14:46,479 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:46,479 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:46,479 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:46,481 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/WALs/jenkins-hbase4.apache.org,41465,1689916486204/jenkins-hbase4.apache.org%2C41465%2C1689916486204.1689916486461 2023-07-21 05:14:46,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK], DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK], DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK]] 2023-07-21 05:14:46,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:46,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:46,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,487 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,488 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 05:14:46,489 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 05:14:46,489 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 05:14:46,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:46,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11322480960, jitterRate=0.05448821187019348}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:46,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 05:14:46,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 05:14:46,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 05:14:46,497 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 05:14:46,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 05:14:46,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 05:14:46,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 05:14:46,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 05:14:46,499 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 05:14:46,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 05:14:46,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 05:14:46,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 05:14:46,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 05:14:46,505 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 05:14:46,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 05:14:46,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 05:14:46,508 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:46,508 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:46,508 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 05:14:46,508 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,508 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:46,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41465,1689916486204, sessionid=0x101864db52b0000, setting cluster-up flag (Was=false) 2023-07-21 05:14:46,513 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 05:14:46,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,523 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 05:14:46,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:46,529 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.hbase-snapshot/.tmp 2023-07-21 05:14:46,529 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 05:14:46,529 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 05:14:46,530 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 05:14:46,531 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:46,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
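The coprocessor records above show the master loading RSGroupAdminEndpoint (plus the test's CPMasterObserver) and the RSGroup info manager refreshing in offline mode. In the 2.4 hbase-rsgroup module this is wired through two configuration keys that must be set before the master starts, and tests then drive the endpoint through the RSGroupAdminClient helper. A sketch assuming those branch-2.4 class and key names, with an illustrative group name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint;
    import org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer;

    public class RSGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // These two keys must be in place before the (mini) cluster is started,
        // so the master loads the rsgroup endpoint and the group-aware balancer.
        conf.set("hbase.coprocessor.master.classes", RSGroupAdminEndpoint.class.getName());
        conf.set("hbase.master.loadbalancer.class", RSGroupBasedLoadBalancer.class.getName());

        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("test_group");            // "test_group" is an illustrative name
          System.out.println(rsGroupAdmin.listRSGroups());  // the default group plus the new one
        }
      }
    }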
2023-07-21 05:14:46,532 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:46,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:46,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 05:14:46,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 05:14:46,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 05:14:46,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:46,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:46,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:46,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 05:14:46,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 05:14:46,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:46,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689916516553 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 05:14:46,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,553 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:46,554 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 05:14:46,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 05:14:46,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 05:14:46,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 05:14:46,555 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:46,559 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 05:14:46,559 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 05:14:46,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916486559,5,FailOnTimeoutGroup] 2023-07-21 05:14:46,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916486562,5,FailOnTimeoutGroup] 2023-07-21 05:14:46,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 05:14:46,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,573 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:46,573 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:46,573 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f 2023-07-21 05:14:46,587 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(951): ClusterId : c9ee95bd-ab56-4466-b217-9076a3b1615d 2023-07-21 05:14:46,591 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:46,592 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(43): 
Procedure flush-table-proc initializing 2023-07-21 05:14:46,594 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:46,596 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/info 2023-07-21 05:14:46,596 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:46,596 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(951): ClusterId : c9ee95bd-ab56-4466-b217-9076a3b1615d 2023-07-21 05:14:46,596 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(951): ClusterId : c9ee95bd-ab56-4466-b217-9076a3b1615d 2023-07-21 05:14:46,598 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:46,599 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,599 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:46,599 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:46,601 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:46,601 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:46,601 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:46,601 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:46,602 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:46,602 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:46,602 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:46,602 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:46,602 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,602 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:46,604 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/table 2023-07-21 05:14:46,604 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:46,605 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,605 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:46,605 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:46,607 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:46,609 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ReadOnlyZKClient(139): Connect 0x3a3e2c2d to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:46,610 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ReadOnlyZKClient(139): Connect 0x5ce75a9c to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:46,610 DEBUG [PEWorker-1] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740 2023-07-21 05:14:46,611 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ReadOnlyZKClient(139): Connect 0x0121ab8d to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:46,613 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740 2023-07-21 05:14:46,620 DEBUG [RS:2;jenkins-hbase4:36007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34f9c51a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:46,620 DEBUG [RS:2;jenkins-hbase4:36007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30c6475d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:46,621 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 05:14:46,624 DEBUG [RS:0;jenkins-hbase4:37839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19e5df64, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:46,624 DEBUG [RS:0;jenkins-hbase4:37839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@365a1de5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:46,624 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:46,634 DEBUG [RS:1;jenkins-hbase4:36711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15deee1a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:46,634 DEBUG [RS:1;jenkins-hbase4:36711] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@306aec06, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:46,635 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:46,635 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36007 2023-07-21 05:14:46,635 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.ShutdownHook(81): Installed 
shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37839 2023-07-21 05:14:46,635 INFO [RS:2;jenkins-hbase4:36007] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:46,635 INFO [RS:0;jenkins-hbase4:37839] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:46,636 INFO [RS:0;jenkins-hbase4:37839] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:46,635 INFO [RS:2;jenkins-hbase4:36007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:46,636 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:46,636 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129765440, jitterRate=0.03654018044471741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:46,636 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:46,636 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:46,636 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41465,1689916486204 with isa=jenkins-hbase4.apache.org/172.31.14.131:36007, startcode=1689916486355 2023-07-21 05:14:46,636 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41465,1689916486204 with isa=jenkins-hbase4.apache.org/172.31.14.131:37839, startcode=1689916486265 2023-07-21 05:14:46,636 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:46,636 DEBUG [RS:0;jenkins-hbase4:37839] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:46,636 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:46,636 DEBUG [RS:2;jenkins-hbase4:36007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:46,638 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 05:14:46,638 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 05:14:46,638 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 05:14:46,638 INFO [RS-EventLoopGroup-12-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36333, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:46,642 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 05:14:46,644 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41465] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,644 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 05:14:46,645 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 05:14:46,645 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42965, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:46,645 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 05:14:46,646 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41465] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,646 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 05:14:46,646 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 05:14:46,646 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f 2023-07-21 05:14:46,646 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35849 2023-07-21 05:14:46,646 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44723 2023-07-21 05:14:46,646 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f 2023-07-21 05:14:46,646 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35849 2023-07-21 05:14:46,647 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44723 2023-07-21 05:14:46,648 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:36711 2023-07-21 05:14:46,648 INFO [RS:1;jenkins-hbase4:36711] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:46,648 INFO [RS:1;jenkins-hbase4:36711] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:46,648 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:46,649 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:46,652 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,652 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41465,1689916486204 with isa=jenkins-hbase4.apache.org/172.31.14.131:36711, startcode=1689916486312 2023-07-21 05:14:46,652 WARN [RS:0;jenkins-hbase4:37839] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 05:14:46,652 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37839,1689916486265] 2023-07-21 05:14:46,652 INFO [RS:0;jenkins-hbase4:37839] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:46,653 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,653 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36007,1689916486355] 2023-07-21 05:14:46,652 DEBUG [RS:1;jenkins-hbase4:36711] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:46,653 WARN [RS:2;jenkins-hbase4:36007] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:46,653 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,653 INFO [RS:2;jenkins-hbase4:36007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:46,653 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,656 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34447, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:46,658 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41465] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 05:14:46,658 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 05:14:46,659 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f 2023-07-21 05:14:46,659 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35849 2023-07-21 05:14:46,660 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44723 2023-07-21 05:14:46,661 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:46,661 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,661 WARN [RS:1;jenkins-hbase4:36711] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 05:14:46,661 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,661 INFO [RS:1;jenkins-hbase4:36711] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:46,661 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36711,1689916486312] 2023-07-21 05:14:46,661 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,661 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,662 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,662 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,662 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,662 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,663 DEBUG 
[RS:0;jenkins-hbase4:37839] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:46,663 INFO [RS:0;jenkins-hbase4:37839] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:46,663 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:46,665 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,665 INFO [RS:2;jenkins-hbase4:36007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:46,665 INFO [RS:0;jenkins-hbase4:37839] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:46,665 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,665 INFO [RS:0;jenkins-hbase4:37839] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:46,665 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,665 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,666 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:46,666 DEBUG [RS:1;jenkins-hbase4:36711] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:46,666 INFO [RS:1;jenkins-hbase4:36711] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:46,667 INFO [RS:2;jenkins-hbase4:36007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:46,667 INFO [RS:2;jenkins-hbase4:36007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:46,667 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,672 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:46,672 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:46,674 INFO [RS:1;jenkins-hbase4:36711] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,675 DEBUG [RS:0;jenkins-hbase4:37839] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,676 INFO [RS:1;jenkins-hbase4:36711] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:46,676 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,679 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,679 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:46,679 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,679 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,679 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:46,679 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,679 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,680 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,680 DEBUG [RS:2;jenkins-hbase4:36007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,681 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,681 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,681 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,682 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,682 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,683 DEBUG [RS:1;jenkins-hbase4:36711] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:46,692 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,692 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,693 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,695 INFO [RS:0;jenkins-hbase4:37839] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:46,695 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37839,1689916486265-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,701 INFO [RS:2;jenkins-hbase4:36007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:46,701 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36007,1689916486355-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:46,703 INFO [RS:1;jenkins-hbase4:36711] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:46,703 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36711,1689916486312-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:46,706 INFO [RS:0;jenkins-hbase4:37839] regionserver.Replication(203): jenkins-hbase4.apache.org,37839,1689916486265 started 2023-07-21 05:14:46,706 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37839,1689916486265, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37839, sessionid=0x101864db52b0001 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37839,1689916486265' 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37839,1689916486265' 2023-07-21 05:14:46,707 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:46,708 DEBUG [RS:0;jenkins-hbase4:37839] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:46,708 DEBUG [RS:0;jenkins-hbase4:37839] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:46,708 INFO [RS:0;jenkins-hbase4:37839] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:46,708 INFO [RS:0;jenkins-hbase4:37839] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 05:14:46,714 INFO [RS:2;jenkins-hbase4:36007] regionserver.Replication(203): jenkins-hbase4.apache.org,36007,1689916486355 started 2023-07-21 05:14:46,715 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36007,1689916486355, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36007, sessionid=0x101864db52b0003 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36007,1689916486355' 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:46,715 INFO [RS:1;jenkins-hbase4:36711] regionserver.Replication(203): jenkins-hbase4.apache.org,36711,1689916486312 started 2023-07-21 05:14:46,715 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36711,1689916486312, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36711, sessionid=0x101864db52b0002 2023-07-21 05:14:46,715 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:46,715 DEBUG [RS:1;jenkins-hbase4:36711] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,715 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36711,1689916486312' 2023-07-21 05:14:46,715 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:46,715 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36007,1689916486355' 2023-07-21 05:14:46,715 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:46,716 
DEBUG [RS:1;jenkins-hbase4:36711] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36711,1689916486312' 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:46,716 DEBUG [RS:2;jenkins-hbase4:36007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:46,716 DEBUG [RS:2;jenkins-hbase4:36007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:46,716 INFO [RS:2;jenkins-hbase4:36007] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:46,716 INFO [RS:2;jenkins-hbase4:36007] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 05:14:46,716 DEBUG [RS:1;jenkins-hbase4:36711] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:46,716 INFO [RS:1;jenkins-hbase4:36711] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:46,716 INFO [RS:1;jenkins-hbase4:36711] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:46,796 DEBUG [jenkins-hbase4:41465] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:46,798 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36007,1689916486355, state=OPENING 2023-07-21 05:14:46,800 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 05:14:46,801 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:46,801 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36007,1689916486355}] 2023-07-21 05:14:46,801 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:46,810 INFO [RS:0;jenkins-hbase4:37839] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 
MB, prefix=jenkins-hbase4.apache.org%2C37839%2C1689916486265, suffix=, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,37839,1689916486265, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs, maxLogs=32 2023-07-21 05:14:46,818 INFO [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36007%2C1689916486355, suffix=, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36007,1689916486355, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs, maxLogs=32 2023-07-21 05:14:46,818 INFO [RS:1;jenkins-hbase4:36711] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36711%2C1689916486312, suffix=, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36711,1689916486312, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs, maxLogs=32 2023-07-21 05:14:46,836 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:46,836 WARN [ReadOnlyZKClient-127.0.0.1:53364@0x7dd65c46] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 05:14:46,837 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:46,841 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:46,841 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:46,842 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:46,843 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36007] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34298 deadline: 1689916546843, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,852 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:46,852 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:46,852 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:46,857 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:46,858 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:46,859 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:46,860 INFO [RS:0;jenkins-hbase4:37839] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,37839,1689916486265/jenkins-hbase4.apache.org%2C37839%2C1689916486265.1689916486810 2023-07-21 05:14:46,861 INFO [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36007,1689916486355/jenkins-hbase4.apache.org%2C36007%2C1689916486355.1689916486818 2023-07-21 05:14:46,861 DEBUG [RS:0;jenkins-hbase4:37839] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK], DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK], DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK]] 2023-07-21 05:14:46,861 DEBUG [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK], DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK], DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK]] 2023-07-21 05:14:46,862 INFO [RS:1;jenkins-hbase4:36711] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36711,1689916486312/jenkins-hbase4.apache.org%2C36711%2C1689916486312.1689916486818 2023-07-21 05:14:46,862 DEBUG [RS:1;jenkins-hbase4:36711] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK], DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK], DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK]] 2023-07-21 05:14:46,955 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:46,957 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE 
authentication for service=AdminService, sasl=false 2023-07-21 05:14:46,958 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:46,962 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 05:14:46,962 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:46,963 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36007%2C1689916486355.meta, suffix=.meta, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36007,1689916486355, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs, maxLogs=32 2023-07-21 05:14:46,977 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:46,977 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:46,977 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:46,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,36007,1689916486355/jenkins-hbase4.apache.org%2C36007%2C1689916486355.meta.1689916486963.meta 2023-07-21 05:14:46,979 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK], DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK], DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK]] 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 05:14:46,980 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 05:14:46,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 05:14:46,987 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 05:14:46,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/info 2023-07-21 05:14:46,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/info 2023-07-21 05:14:46,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 05:14:46,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 05:14:46,991 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:46,991 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/rep_barrier 2023-07-21 05:14:46,992 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 05:14:46,992 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,992 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 05:14:46,993 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/table 2023-07-21 05:14:46,993 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/table 2023-07-21 05:14:46,994 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 05:14:46,994 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:46,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740 2023-07-21 05:14:46,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740 2023-07-21 05:14:46,999 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 05:14:47,003 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 05:14:47,004 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9741993600, jitterRate=-0.09270614385604858}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 05:14:47,004 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 05:14:47,005 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689916486955 2023-07-21 05:14:47,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 05:14:47,011 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 05:14:47,012 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36007,1689916486355, state=OPEN 2023-07-21 05:14:47,014 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 05:14:47,014 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 05:14:47,019 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 05:14:47,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36007,1689916486355 in 213 msec 2023-07-21 05:14:47,021 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 05:14:47,021 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 382 msec 2023-07-21 05:14:47,030 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 490 msec 2023-07-21 05:14:47,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689916487030, completionTime=-1 2023-07-21 05:14:47,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 05:14:47,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 05:14:47,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 05:14:47,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689916547035 2023-07-21 05:14:47,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689916607035 2023-07-21 05:14:47,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-21 05:14:47,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41465,1689916486204-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41465,1689916486204-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41465,1689916486204-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41465, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 05:14:47,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:47,047 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 05:14:47,047 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 05:14:47,048 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:47,049 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:47,051 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,052 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130 empty. 2023-07-21 05:14:47,052 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,052 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 05:14:47,067 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:47,068 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c445ace1c5aabcf0de02aaf524278130, NAME => 'hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp 2023-07-21 05:14:47,076 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:47,077 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c445ace1c5aabcf0de02aaf524278130, disabling compactions & flushes 2023-07-21 05:14:47,077 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 
2023-07-21 05:14:47,077 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:47,077 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. after waiting 0 ms 2023-07-21 05:14:47,077 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:47,077 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:47,077 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c445ace1c5aabcf0de02aaf524278130: 2023-07-21 05:14:47,079 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:47,080 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916487080"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916487080"}]},"ts":"1689916487080"} 2023-07-21 05:14:47,083 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:47,083 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:47,083 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916487083"}]},"ts":"1689916487083"} 2023-07-21 05:14:47,084 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 05:14:47,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:47,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:47,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:47,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:47,087 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:47,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c445ace1c5aabcf0de02aaf524278130, ASSIGN}] 2023-07-21 05:14:47,089 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c445ace1c5aabcf0de02aaf524278130, ASSIGN 2023-07-21 05:14:47,090 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c445ace1c5aabcf0de02aaf524278130, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37839,1689916486265; forceNewPlan=false, retain=false 2023-07-21 05:14:47,160 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:47,162 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 05:14:47,164 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:47,164 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:47,166 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,167 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3 empty. 
2023-07-21 05:14:47,167 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,167 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 05:14:47,180 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:47,182 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4a7f959cb0cb6de634b4673eb2e845c3, NAME => 'hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp 2023-07-21 05:14:47,191 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 4a7f959cb0cb6de634b4673eb2e845c3, disabling compactions & flushes 2023-07-21 05:14:47,192 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. after waiting 0 ms 2023-07-21 05:14:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:47,192 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 
2023-07-21 05:14:47,192 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 4a7f959cb0cb6de634b4673eb2e845c3: 2023-07-21 05:14:47,195 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:47,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916487196"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916487196"}]},"ts":"1689916487196"} 2023-07-21 05:14:47,197 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 05:14:47,198 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:47,198 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916487198"}]},"ts":"1689916487198"} 2023-07-21 05:14:47,199 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 05:14:47,203 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:47,204 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:47,204 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4a7f959cb0cb6de634b4673eb2e845c3, ASSIGN}] 2023-07-21 05:14:47,205 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4a7f959cb0cb6de634b4673eb2e845c3, ASSIGN 2023-07-21 05:14:47,205 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=4a7f959cb0cb6de634b4673eb2e845c3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37839,1689916486265; forceNewPlan=false, retain=false 2023-07-21 05:14:47,205 INFO [jenkins-hbase4:41465] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 05:14:47,207 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c445ace1c5aabcf0de02aaf524278130, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,207 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916487207"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916487207"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916487207"}]},"ts":"1689916487207"} 2023-07-21 05:14:47,207 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=4a7f959cb0cb6de634b4673eb2e845c3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,207 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916487207"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916487207"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916487207"}]},"ts":"1689916487207"} 2023-07-21 05:14:47,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure c445ace1c5aabcf0de02aaf524278130, server=jenkins-hbase4.apache.org,37839,1689916486265}] 2023-07-21 05:14:47,209 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 4a7f959cb0cb6de634b4673eb2e845c3, server=jenkins-hbase4.apache.org,37839,1689916486265}] 2023-07-21 05:14:47,360 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,360 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:47,362 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35912, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:47,366 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 
2023-07-21 05:14:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c445ace1c5aabcf0de02aaf524278130, NAME => 'hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,367 INFO [StoreOpener-c445ace1c5aabcf0de02aaf524278130-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,369 DEBUG [StoreOpener-c445ace1c5aabcf0de02aaf524278130-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/info 2023-07-21 05:14:47,369 DEBUG [StoreOpener-c445ace1c5aabcf0de02aaf524278130-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/info 2023-07-21 05:14:47,369 INFO [StoreOpener-c445ace1c5aabcf0de02aaf524278130-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c445ace1c5aabcf0de02aaf524278130 columnFamilyName info 2023-07-21 05:14:47,370 INFO [StoreOpener-c445ace1c5aabcf0de02aaf524278130-1] regionserver.HStore(310): Store=c445ace1c5aabcf0de02aaf524278130/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:47,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,371 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,373 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:47,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:47,376 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c445ace1c5aabcf0de02aaf524278130; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10250047680, jitterRate=-0.045389920473098755}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:47,376 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c445ace1c5aabcf0de02aaf524278130: 2023-07-21 05:14:47,377 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130., pid=8, masterSystemTime=1689916487360 2023-07-21 05:14:47,380 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:47,381 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:47,381 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 
2023-07-21 05:14:47,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4a7f959cb0cb6de634b4673eb2e845c3, NAME => 'hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:47,381 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c445ace1c5aabcf0de02aaf524278130, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 05:14:47,382 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689916487381"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916487381"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916487381"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916487381"}]},"ts":"1689916487381"} 2023-07-21 05:14:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. service=MultiRowMutationService 2023-07-21 05:14:47,382 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 05:14:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,382 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,383 INFO [StoreOpener-4a7f959cb0cb6de634b4673eb2e845c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-21 05:14:47,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure c445ace1c5aabcf0de02aaf524278130, server=jenkins-hbase4.apache.org,37839,1689916486265 in 175 msec 2023-07-21 05:14:47,385 DEBUG [StoreOpener-4a7f959cb0cb6de634b4673eb2e845c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/m 2023-07-21 05:14:47,385 DEBUG [StoreOpener-4a7f959cb0cb6de634b4673eb2e845c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/m 2023-07-21 05:14:47,386 INFO [StoreOpener-4a7f959cb0cb6de634b4673eb2e845c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4a7f959cb0cb6de634b4673eb2e845c3 columnFamilyName m 2023-07-21 05:14:47,386 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 05:14:47,386 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c445ace1c5aabcf0de02aaf524278130, ASSIGN in 297 msec 2023-07-21 05:14:47,386 INFO [StoreOpener-4a7f959cb0cb6de634b4673eb2e845c3-1] regionserver.HStore(310): Store=4a7f959cb0cb6de634b4673eb2e845c3/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:47,387 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:47,387 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916487387"}]},"ts":"1689916487387"} 2023-07-21 05:14:47,387 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,388 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,388 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 05:14:47,390 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:47,390 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:47,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure 
table=hbase:namespace in 347 msec 2023-07-21 05:14:47,392 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:47,393 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4a7f959cb0cb6de634b4673eb2e845c3; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@511b814e, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:47,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4a7f959cb0cb6de634b4673eb2e845c3: 2023-07-21 05:14:47,393 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3., pid=9, masterSystemTime=1689916487360 2023-07-21 05:14:47,394 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:47,394 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:47,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=4a7f959cb0cb6de634b4673eb2e845c3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,395 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689916487395"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916487395"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916487395"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916487395"}]},"ts":"1689916487395"} 2023-07-21 05:14:47,397 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 05:14:47,397 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 4a7f959cb0cb6de634b4673eb2e845c3, server=jenkins-hbase4.apache.org,37839,1689916486265 in 187 msec 2023-07-21 05:14:47,398 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 05:14:47,398 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=4a7f959cb0cb6de634b4673eb2e845c3, ASSIGN in 193 msec 2023-07-21 05:14:47,399 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:47,399 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916487399"}]},"ts":"1689916487399"} 2023-07-21 05:14:47,400 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 
05:14:47,402 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:47,403 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 241 msec 2023-07-21 05:14:47,405 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 05:14:47,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 05:14:47,452 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:47,452 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:47,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:47,461 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35914, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:47,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 05:14:47,469 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 05:14:47,469 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-21 05:14:47,479 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:47,479 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,480 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:47,482 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:47,484 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 19 msec 2023-07-21 05:14:47,485 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 05:14:47,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 05:14:47,494 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:47,496 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-21 05:14:47,506 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 05:14:47,511 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 05:14:47,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.106sec 2023-07-21 05:14:47,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 05:14:47,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 05:14:47,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 05:14:47,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41465,1689916486204-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
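The "Master has completed initialization 1.106sec" entry above is the point tests normally wait for before issuing admin or rsgroup calls. A hedged sketch of one way to wait for it with the mini-cluster API; the 60-second timeout and the utility instance are illustrative assumptions.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class WaitForMasterSketch {
  // Sketch: block until the active master of an already-started mini cluster
  // reports initialization complete, as logged above.
  static void awaitMasterInit(HBaseTestingUtility util) throws Exception {
    util.waitFor(60_000,
        () -> util.getMiniHBaseCluster().getMaster().isInitialized());
  }
}
```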
2023-07-21 05:14:47,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41465,1689916486204-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 05:14:47,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 05:14:47,593 DEBUG [Listener at localhost/37815] zookeeper.ReadOnlyZKClient(139): Connect 0x5fb63931 to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:47,598 DEBUG [Listener at localhost/37815] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27cf6cf2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:47,600 DEBUG [hconnection-0x3b30d156-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:47,602 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34318, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:47,603 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:47,603 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:47,606 DEBUG [Listener at localhost/37815] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 05:14:47,607 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 05:14:47,611 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 05:14:47,612 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:47,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 05:14:47,613 DEBUG [Listener at localhost/37815] zookeeper.ReadOnlyZKClient(139): Connect 0x127e508f to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:47,618 DEBUG [Listener at localhost/37815] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ff999f7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:47,618 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:47,622 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53364, 
baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:47,623 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101864db52b000a connected 2023-07-21 05:14:47,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,628 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 05:14:47,640 INFO [Listener at localhost/37815] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 05:14:47,640 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:47,640 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:47,641 INFO [Listener at localhost/37815] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 05:14:47,641 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 05:14:47,641 INFO [Listener at localhost/37815] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 05:14:47,641 INFO [Listener at localhost/37815] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 05:14:47,641 INFO [Listener at localhost/37815] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38681 2023-07-21 05:14:47,642 INFO [Listener at localhost/37815] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 05:14:47,643 DEBUG [Listener at localhost/37815] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 05:14:47,644 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:47,644 INFO [Listener at localhost/37815] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 05:14:47,645 INFO [Listener at localhost/37815] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38681 connecting to ZooKeeper ensemble=127.0.0.1:53364 2023-07-21 05:14:47,650 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:386810x0, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 05:14:47,652 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38681-0x101864db52b000b connected 2023-07-21 05:14:47,652 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 05:14:47,652 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 05:14:47,653 DEBUG [Listener at localhost/37815] zookeeper.ZKUtil(164): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 05:14:47,654 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38681 2023-07-21 05:14:47,654 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38681 2023-07-21 05:14:47,654 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38681 2023-07-21 05:14:47,654 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38681 2023-07-21 05:14:47,655 DEBUG [Listener at localhost/37815] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38681 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 05:14:47,657 INFO [Listener at localhost/37815] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 05:14:47,658 INFO [Listener at localhost/37815] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
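"Restoring servers: 1" followed by the region-server bring-up above (RPC executors, block cache, ZooKeeper registration) is the harness starting a fresh region server so the cluster returns to its expected size. A hedged sketch of how such a restore can be done with the mini-cluster API; the method and class names here are illustrative, not necessarily the exact code this suite runs.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

public class RestoreServerSketch {
  // Sketch: start one extra region server on a running mini cluster and wait
  // for it to come online, producing startup lines like those around this entry.
  static void restoreOneRegionServer(HBaseTestingUtility util) throws Exception {
    RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline();
  }
}
```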
2023-07-21 05:14:47,658 INFO [Listener at localhost/37815] http.HttpServer(1146): Jetty bound to port 40011 2023-07-21 05:14:47,658 INFO [Listener at localhost/37815] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 05:14:47,662 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:47,662 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78bcc1ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,AVAILABLE} 2023-07-21 05:14:47,662 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:47,662 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@52f634f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-21 05:14:47,667 INFO [Listener at localhost/37815] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 05:14:47,668 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 05:14:47,668 INFO [Listener at localhost/37815] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 05:14:47,669 INFO [Listener at localhost/37815] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 05:14:47,669 INFO [Listener at localhost/37815] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 05:14:47,670 INFO [Listener at localhost/37815] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3b5847ef{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:47,672 INFO [Listener at localhost/37815] server.AbstractConnector(333): Started ServerConnector@307a6dc1{HTTP/1.1, (http/1.1)}{0.0.0.0:40011} 2023-07-21 05:14:47,672 INFO [Listener at localhost/37815] server.Server(415): Started @44795ms 2023-07-21 05:14:47,675 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(951): ClusterId : c9ee95bd-ab56-4466-b217-9076a3b1615d 2023-07-21 05:14:47,677 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 05:14:47,679 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 05:14:47,679 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 05:14:47,681 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 05:14:47,682 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ReadOnlyZKClient(139): Connect 0x6fd1c6d6 to 127.0.0.1:53364 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 05:14:47,686 DEBUG [RS:3;jenkins-hbase4:38681] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6116355c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 05:14:47,687 DEBUG [RS:3;jenkins-hbase4:38681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6529ada6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 05:14:47,695 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:38681 2023-07-21 05:14:47,695 INFO [RS:3;jenkins-hbase4:38681] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 05:14:47,696 INFO [RS:3;jenkins-hbase4:38681] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 05:14:47,696 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 05:14:47,696 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41465,1689916486204 with isa=jenkins-hbase4.apache.org/172.31.14.131:38681, startcode=1689916487640 2023-07-21 05:14:47,696 DEBUG [RS:3;jenkins-hbase4:38681] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 05:14:47,699 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59301, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 05:14:47,699 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41465] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,699 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
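Once the new server reports for duty, the master registers it and the rsgroup ServerEventsListenerThread above folds it into the default group. A hedged sketch of how a client could observe that membership through the rsgroup admin API; the connection argument is a placeholder.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupSketch {
  // Sketch: newly registered region servers land in the "default" group,
  // which the listener entries above are keeping up to date in ZooKeeper.
  static void printDefaultGroupMembers(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
    for (Address server : defaultGroup.getServers()) {
      System.out.println("default group member: " + server);
    }
  }
}
```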
2023-07-21 05:14:47,700 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f 2023-07-21 05:14:47,700 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35849 2023-07-21 05:14:47,700 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44723 2023-07-21 05:14:47,705 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:47,705 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:47,705 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,705 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:47,705 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:47,706 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 05:14:47,706 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,706 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,706 WARN [RS:3;jenkins-hbase4:38681] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 05:14:47,706 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38681,1689916487640] 2023-07-21 05:14:47,707 INFO [RS:3;jenkins-hbase4:38681] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 05:14:47,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,707 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,707 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 05:14:47,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:47,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:47,707 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:47,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:47,714 
DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,714 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:47,714 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:47,715 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ZKUtil(162): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:47,715 DEBUG [RS:3;jenkins-hbase4:38681] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 05:14:47,715 INFO [RS:3;jenkins-hbase4:38681] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 05:14:47,717 INFO [RS:3;jenkins-hbase4:38681] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 05:14:47,717 INFO [RS:3;jenkins-hbase4:38681] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 05:14:47,717 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,717 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 05:14:47,719 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
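The memstore and compaction-throughput figures above (782.4 M global memstore limit, 100 MB/s upper and 50 MB/s lower compaction bounds) come from standard sizing settings. A hedged sketch of the corresponding configuration keys with the values this log reports; treat it as an illustration of where those numbers come from, not as recommended values.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ThroughputConfigSketch {
  static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of heap used as the global memstore limit (782.4 M here).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // PressureAwareCompactionThroughputController bounds, in bytes/sec,
    // matching the 100 MB/s and 50 MB/s figures logged above.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}
```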
2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,720 DEBUG [RS:3;jenkins-hbase4:38681] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 05:14:47,721 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,721 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,721 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 05:14:47,734 INFO [RS:3;jenkins-hbase4:38681] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 05:14:47,734 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38681,1689916487640-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 05:14:47,745 INFO [RS:3;jenkins-hbase4:38681] regionserver.Replication(203): jenkins-hbase4.apache.org,38681,1689916487640 started 2023-07-21 05:14:47,745 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38681,1689916487640, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38681, sessionid=0x101864db52b000b 2023-07-21 05:14:47,745 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 05:14:47,745 DEBUG [RS:3;jenkins-hbase4:38681] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,745 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38681,1689916487640' 2023-07-21 05:14:47,745 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38681,1689916487640' 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 05:14:47,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:47,746 DEBUG [RS:3;jenkins-hbase4:38681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 05:14:47,747 DEBUG [RS:3;jenkins-hbase4:38681] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 05:14:47,747 INFO [RS:3;jenkins-hbase4:38681] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 05:14:47,747 INFO [RS:3;jenkins-hbase4:38681] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
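The "add rsgroup master" request above and the MoveServers failure recorded in the entries that follow are the TestRSGroupsBase helper at work: it creates a "master" group and tries to move the master's address into it, and RSGroupAdminServer rejects the move because that address is not a live region server (the harness logs it as "Got this on setup, FYI" and carries on). A hedged sketch of that call sequence; the connection and address arguments are placeholders.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSketch {
  // Sketch: reproduce the AddRSGroup/MoveServers exchange seen in this log.
  static void moveMasterToOwnGroup(Connection conn, Address masterAddress) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("master");
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // Expected when masterAddress is not a registered region server:
      // "Server ... is either offline or it does not exist."
    }
  }
}
```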
2023-07-21 05:14:47,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:47,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:47,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:47,753 DEBUG [hconnection-0x51a2e694-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:47,755 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34332, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:47,760 DEBUG [hconnection-0x51a2e694-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 05:14:47,761 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35920, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 05:14:47,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:47,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:47,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52762 deadline: 1689917687766, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:47,767 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:47,768 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:47,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,769 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:47,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:47,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:47,828 INFO [Listener at localhost/37815] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=565 (was 516) Potentially hanging thread: jenkins-hbase4:36007Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51a2e694-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117544939-2331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f-prefix:jenkins-hbase4.apache.org,36007,1689916486355 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@370f06c2 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2125164216@qtp-45673379-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34685 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp117544939-2332 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35849 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5ce75a9c-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase4:36711 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:38681-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826966699-2605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:33674 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:37015 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-996848069_17 at /127.0.0.1:56236 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:36007 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1957659471-2337 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:37015 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x127e508f-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1378151966@qtp-1775962779-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41179 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:33684 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35091 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826966699-2606 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1338690874_17 at /127.0.0.1:33598 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@41f16d49 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f42a090-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36809 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@1da00a90 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp117544939-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1063900309-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data1/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-996848069_17 at /127.0.0.1:56266 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1063900309-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x0121ab8d 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826966699-2603 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x2f42a090-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2295 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1826966699-2601-acceptor-0@1efd747d-ServerConnector@307a6dc1{HTTP/1.1, (http/1.1)}{0.0.0.0:40011} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:37839Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:37839 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1063900309-2234 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5fb63931-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1826966699-2602 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x6fd1c6d6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1728531437-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:56306 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728531437-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f-prefix:jenkins-hbase4.apache.org,36007,1689916486355.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1728531437-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@14ed544e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x7dd65c46-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:41465 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37015 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@733c7fce[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60035@0x0900ac95-SendThread(127.0.0.1:60035) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5ce75a9c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-54fbbca0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1338690874_17 at /127.0.0.1:49646 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728531437-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f42a090-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1826966699-2600 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6a2e17b3-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1957659471-2341 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:35849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1957659471-2343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728531437-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:35849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-34bf376c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 37815 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@ecfafed[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-52c21f14-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:36711-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1338690874_17 at /127.0.0.1:33652 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase4:38681 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5fb63931 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51a2e694-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:37015 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1957659471-2338 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.util.JvmPauseMonitor$Monitor@59426e55 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 37815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 35849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:53364 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data3/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:37015 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x0121ab8d-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/37815 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@7aa65829 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x7dd65c46-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x2f42a090-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@19a837ad sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1957659471-2336 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1957659471-2342 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3b30d156-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x127e508f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 36809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1728531437-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f42a090-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: 1829633885@qtp-1850898591-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x2f42a090-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916486559 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data2/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7ed125ba java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826966699-2607 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x3a3e2c2d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data6/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp267518829-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x7dd65c46 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117544939-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x6fd1c6d6-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1911643004@qtp-1850898591-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38623 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-996848069_17 at /127.0.0.1:33624 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2f42a090-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:35849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 145838619@qtp-1424316824-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35113 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) 
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data5/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1063900309-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data4/current/BP-1240127492-172.31.14.131-1689916485425 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1311553686@qtp-45673379-1 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:49682 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1063900309-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1957659471-2339 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-996848069_17 at /127.0.0.1:49624 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 35091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f42a090-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-49f49fc6-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728531437-2265 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 701820731@qtp-1424316824-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData-prefix:jenkins-hbase4.apache.org,41465,1689916486204 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f-prefix:jenkins-hbase4.apache.org,37839,1689916486265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:37015 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5a75ccb6 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1306846356_17 at /127.0.0.1:49660 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp117544939-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:36711Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1826966699-2604 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:37015 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5ce75a9c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1063900309-2235-acceptor-0@6fb9108f-ServerConnector@17bb529f{HTTP/1.1, (http/1.1)}{0.0.0.0:44723} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@421fb3dd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42797,1689916480560 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41465 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1240127492-172.31.14.131-1689916485425 heartbeating to 
localhost/127.0.0.1:35849 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60035@0x0900ac95 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x6fd1c6d6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1306846356_17 at /127.0.0.1:33658 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@dd43c78[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@71a6ef0b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@695ba0f7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1338690874_17 at /127.0.0.1:56292 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x5fb63931-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 35849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1755313123) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: Listener at localhost/40271-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 171347273@qtp-1775962779-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1063900309-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916486562 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 35091 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 35849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS:2;jenkins-hbase4:36007-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp267518829-2302 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1306846356_17 at /127.0.0.1:56322 [Receiving block 
BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:38681Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f-prefix:jenkins-hbase4.apache.org,36711,1689916486312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS:0;jenkins-hbase4:37839-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37815 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ProcessThread(sid:0 cport:53364): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 2 on default port 36809 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:37015 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:35849 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp117544939-2326-acceptor-0@3a326737-ServerConnector@46a2470b{HTTP/1.1, (http/1.1)}{0.0.0.0:37567} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41465,1689916486204 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38681 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60035@0x0900ac95-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/40271-SendThread(127.0.0.1:60035) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:56330 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (340318198) connection to localhost/127.0.0.1:37015 from jenkins.hfs.6 
java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: M:0;jenkins-hbase4:41465 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@26675bd2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117544939-2327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117544939-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37839 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp267518829-2296-acceptor-0@5fe671ed-ServerConnector@28b0b21b{HTTP/1.1, (http/1.1)}{0.0.0.0:46225} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1240127492-172.31.14.131-1689916485425:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36711 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1957659471-2340-acceptor-0@33fe585c-ServerConnector@7a6873aa{HTTP/1.1, (http/1.1)}{0.0.0.0:39393} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1063900309-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1728531437-2266-acceptor-0@7b8f20e8-ServerConnector@5a351aef{HTTP/1.1, (http/1.1)}{0.0.0.0:35115} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:53364@0x127e508f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-360215325_17 at /127.0.0.1:49668 [Receiving block BP-1240127492-172.31.14.131-1689916485425:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x3a3e2c2d-SendThread(127.0.0.1:53364) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x0121ab8d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53364@0x3a3e2c2d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/136145594.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37815.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) - Thread LEAK? -, OpenFileDescriptor=819 (was 801) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=174 (was 174), AvailableMemoryMB=3481 (was 3646) 2023-07-21 05:14:47,832 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-21 05:14:47,849 INFO [RS:3;jenkins-hbase4:38681] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38681%2C1689916487640, suffix=, logDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,38681,1689916487640, archiveDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs, maxLogs=32 2023-07-21 05:14:47,850 INFO [Listener at localhost/37815] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=565, OpenFileDescriptor=819, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=174, AvailableMemoryMB=3479 2023-07-21 05:14:47,850 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-21 05:14:47,851 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-21 05:14:47,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:47,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:47,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:47,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:47,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:47,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:47,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:47,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:47,872 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK] 2023-07-21 05:14:47,872 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK] 2023-07-21 05:14:47,872 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK] 2023-07-21 05:14:47,873 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:47,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:47,874 INFO [RS:3;jenkins-hbase4:38681] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/WALs/jenkins-hbase4.apache.org,38681,1689916487640/jenkins-hbase4.apache.org%2C38681%2C1689916487640.1689916487849 2023-07-21 05:14:47,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:47,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:47,878 DEBUG [RS:3;jenkins-hbase4:38681] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44725,DS-6bfeb7ff-bbb3-492b-96da-3baf630cebf8,DISK], DatanodeInfoWithStorage[127.0.0.1:36355,DS-1b56bf0b-21b4-496a-8b90-ec8643561175,DISK], DatanodeInfoWithStorage[127.0.0.1:36759,DS-860d6c57-d735-4cdf-9619-83aac57320ef,DISK]] 2023-07-21 05:14:47,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:47,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:47,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:47,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52762 deadline: 1689917687888, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:47,889 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:47,891 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:47,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:47,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:47,892 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:47,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:47,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:47,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:47,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 05:14:47,898 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:47,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 05:14:47,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 05:14:47,900 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:47,900 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:47,901 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:47,902 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 05:14:47,904 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 
05:14:47,905 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd empty. 2023-07-21 05:14:47,905 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 05:14:47,905 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 05:14:47,925 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 05:14:47,928 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 82b68b936de10070df663d64675649fd, NAME => 't1,,1689916487894.82b68b936de10070df663d64675649fd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689916487894.82b68b936de10070df663d64675649fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 82b68b936de10070df663d64675649fd, disabling compactions & flushes 2023-07-21 05:14:47,941 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689916487894.82b68b936de10070df663d64675649fd. after waiting 0 ms 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:47,941 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:47,941 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 82b68b936de10070df663d64675649fd: 2023-07-21 05:14:47,944 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 05:14:47,944 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916487944"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916487944"}]},"ts":"1689916487944"} 2023-07-21 05:14:47,946 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 05:14:47,947 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 05:14:47,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916487947"}]},"ts":"1689916487947"} 2023-07-21 05:14:47,948 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 05:14:47,951 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 05:14:47,952 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 05:14:47,952 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 05:14:47,952 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 05:14:47,952 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 05:14:47,952 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 05:14:47,952 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, ASSIGN}] 2023-07-21 05:14:47,953 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, ASSIGN 2023-07-21 05:14:47,954 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38681,1689916487640; forceNewPlan=false, retain=false 2023-07-21 05:14:48,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 05:14:48,104 INFO [jenkins-hbase4:41465] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 05:14:48,106 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=82b68b936de10070df663d64675649fd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:48,106 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916488106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916488106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916488106"}]},"ts":"1689916488106"} 2023-07-21 05:14:48,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 82b68b936de10070df663d64675649fd, server=jenkins-hbase4.apache.org,38681,1689916487640}] 2023-07-21 05:14:48,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 05:14:48,260 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:48,260 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 05:14:48,262 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 05:14:48,265 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:48,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 82b68b936de10070df663d64675649fd, NAME => 't1,,1689916487894.82b68b936de10070df663d64675649fd.', STARTKEY => '', ENDKEY => ''} 2023-07-21 05:14:48,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689916487894.82b68b936de10070df663d64675649fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 05:14:48,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,267 INFO [StoreOpener-82b68b936de10070df663d64675649fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,268 DEBUG [StoreOpener-82b68b936de10070df663d64675649fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd/cf1 2023-07-21 05:14:48,268 DEBUG [StoreOpener-82b68b936de10070df663d64675649fd-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd/cf1 2023-07-21 05:14:48,269 INFO [StoreOpener-82b68b936de10070df663d64675649fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 82b68b936de10070df663d64675649fd columnFamilyName cf1 2023-07-21 05:14:48,269 INFO [StoreOpener-82b68b936de10070df663d64675649fd-1] regionserver.HStore(310): Store=82b68b936de10070df663d64675649fd/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 05:14:48,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 05:14:48,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 82b68b936de10070df663d64675649fd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11260821760, jitterRate=0.04874575138092041}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 05:14:48,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 82b68b936de10070df663d64675649fd: 2023-07-21 05:14:48,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689916487894.82b68b936de10070df663d64675649fd., pid=14, masterSystemTime=1689916488260 2023-07-21 05:14:48,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:48,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689916487894.82b68b936de10070df663d64675649fd. 
2023-07-21 05:14:48,281 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=82b68b936de10070df663d64675649fd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:48,281 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916488281"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689916488281"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689916488281"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689916488281"}]},"ts":"1689916488281"} 2023-07-21 05:14:48,283 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 05:14:48,283 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 82b68b936de10070df663d64675649fd, server=jenkins-hbase4.apache.org,38681,1689916487640 in 175 msec 2023-07-21 05:14:48,284 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 05:14:48,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, ASSIGN in 331 msec 2023-07-21 05:14:48,285 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 05:14:48,285 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916488285"}]},"ts":"1689916488285"} 2023-07-21 05:14:48,286 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 05:14:48,290 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 05:14:48,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 395 msec 2023-07-21 05:14:48,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 05:14:48,502 INFO [Listener at localhost/37815] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 05:14:48,502 DEBUG [Listener at localhost/37815] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 05:14:48,502 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:48,504 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 05:14:48,505 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:48,505 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
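The procedure run above (pid=12 through pid=14, then the HBaseTestingUtility wait) creates table `t1` with a single `cf1` family and blocks until its only region is assigned. A minimal client-side sketch of the same steps, assuming an `Admin` handle named `admin` and a testing utility named `testUtil` (both names are assumptions for illustration):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateT1Sketch {
      static final TableName T1 = TableName.valueOf("t1");

      /** Creates 't1' with one 'cf1' family, matching the schema HMaster printed above. */
      static void createT1(Admin admin, HBaseTestingUtility testUtil) throws IOException {
        admin.createTable(TableDescriptorBuilder.newBuilder(T1)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                .setMaxVersions(1)   // VERSIONS => '1' in the logged descriptor
                .build())
            .build());
        // Block until the region of 't1' shows up as assigned, the same check
        // that HBaseTestingUtility(3430/3504) logs above.
        testUtil.waitUntilAllRegionsAssigned(T1);
      }
    }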
2023-07-21 05:14:48,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 05:14:48,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 05:14:48,509 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 05:14:48,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 05:14:48,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:52762 deadline: 1689916548506, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 05:14:48,511 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:48,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-21 05:14:48,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:48,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:48,613 INFO [Listener at localhost/37815] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 05:14:48,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-21 05:14:48,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 05:14:48,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 05:14:48,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916488617"}]},"ts":"1689916488617"} 2023-07-21 05:14:48,618 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 05:14:48,620 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 05:14:48,620 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, UNASSIGN}] 2023-07-21 05:14:48,621 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, UNASSIGN 2023-07-21 05:14:48,622 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=82b68b936de10070df663d64675649fd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:48,622 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916488622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689916488622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689916488622"}]},"ts":"1689916488622"} 2023-07-21 05:14:48,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 82b68b936de10070df663d64675649fd, server=jenkins-hbase4.apache.org,38681,1689916487640}] 2023-07-21 05:14:48,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 05:14:48,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 82b68b936de10070df663d64675649fd, disabling compactions & flushes 2023-07-21 05:14:48,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:48,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:48,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689916487894.82b68b936de10070df663d64675649fd. after waiting 0 ms 2023-07-21 05:14:48,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689916487894.82b68b936de10070df663d64675649fd. 
2023-07-21 05:14:48,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/default/t1/82b68b936de10070df663d64675649fd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 05:14:48,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689916487894.82b68b936de10070df663d64675649fd. 2023-07-21 05:14:48,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 82b68b936de10070df663d64675649fd: 2023-07-21 05:14:48,782 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,782 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=82b68b936de10070df663d64675649fd, regionState=CLOSED 2023-07-21 05:14:48,782 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689916488782"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689916488782"}]},"ts":"1689916488782"} 2023-07-21 05:14:48,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 05:14:48,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 82b68b936de10070df663d64675649fd, server=jenkins-hbase4.apache.org,38681,1689916487640 in 160 msec 2023-07-21 05:14:48,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 05:14:48,788 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=82b68b936de10070df663d64675649fd, UNASSIGN in 166 msec 2023-07-21 05:14:48,788 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689916488788"}]},"ts":"1689916488788"} 2023-07-21 05:14:48,789 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 05:14:48,792 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 05:14:48,794 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-21 05:14:48,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 05:14:48,919 INFO [Listener at localhost/37815] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 05:14:48,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-21 05:14:48,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-21 05:14:48,923 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 05:14:48,923 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 05:14:48,924 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 05:14:48,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:48,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:48,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:48,927 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 05:14:48,929 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd/cf1, FileablePath, hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd/recovered.edits] 2023-07-21 05:14:48,935 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd/recovered.edits/4.seqid to hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/archive/data/default/t1/82b68b936de10070df663d64675649fd/recovered.edits/4.seqid 2023-07-21 05:14:48,936 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/.tmp/data/default/t1/82b68b936de10070df663d64675649fd 2023-07-21 05:14:48,936 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 05:14:48,938 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 05:14:48,940 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 05:14:48,941 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 05:14:48,942 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 05:14:48,942 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-21 05:14:48,943 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689916487894.82b68b936de10070df663d64675649fd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689916488942"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:48,944 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 05:14:48,944 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 82b68b936de10070df663d64675649fd, NAME => 't1,,1689916487894.82b68b936de10070df663d64675649fd.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 05:14:48,944 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 05:14:48,944 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689916488944"}]},"ts":"9223372036854775807"} 2023-07-21 05:14:48,945 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 05:14:48,947 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 05:14:48,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 27 msec 2023-07-21 05:14:49,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 05:14:49,030 INFO [Listener at localhost/37815] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 05:14:49,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
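pid=15 above is the second create of the already-existing `t1`, which the master rolls back with TableExistsException, and pid=16/19 are the disable and delete that clean the table up afterwards. A minimal sketch of that client-side sequence, reusing the assumed `admin` handle from the previous sketch:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RecreateAndDropT1Sketch {
      static void recreateAndDrop(Admin admin) throws IOException {
        TableName t1 = TableName.valueOf("t1");
        try {
          // Second create of the same table name; the master rejects it (pid=15, ROLLEDBACK above).
          admin.createTable(TableDescriptorBuilder.newBuilder(t1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
              .build());
        } catch (TableExistsException expected) {
          // Expected: 't1' already exists, so creating it again must not change its rsgroup placement.
        }
        // Cleanup, matching DisableTableProcedure (pid=16) and DeleteTableProcedure (pid=19).
        admin.disableTable(t1);
        admin.deleteTable(t1);
      }
    }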
2023-07-21 05:14:49,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,048 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52762 deadline: 1689917689058, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,059 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:49,071 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,073 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,099 INFO [Listener at localhost/37815] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 565) - Thread LEAK? -, OpenFileDescriptor=830 (was 819) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=419 (was 438), ProcessCount=174 (was 174), AvailableMemoryMB=3496 (was 3479) - AvailableMemoryMB LEAK? 
- 2023-07-21 05:14:49,099 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 05:14:49,124 INFO [Listener at localhost/37815] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=830, MaxFileDescriptor=60000, SystemLoadAverage=419, ProcessCount=174, AvailableMemoryMB=3492 2023-07-21 05:14:49,124 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-21 05:14:49,124 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 05:14:49,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:49,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,138 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,143 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689153, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,154 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:49,156 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,157 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 05:14:49,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:49,160 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-21 05:14:49,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 05:14:49,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 05:14:49,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
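The repeated ConstraintException above comes from the shared setup/teardown path in TestRSGroupsBase: each test method re-creates the special "master" rsgroup and then asks the master to move its own address (jenkins-hbase4.apache.org:41465) into that group. Because the master is not a live region server, RSGroupAdminServer.moveServers rejects the request and the test only logs the failure ("Got this on setup, FYI") before waiting for cleanup. The Java sketch below approximates that call path; it is not the actual test source, and the connection setup, class/variable names, and the use of getClusterMetrics().getMasterName() to obtain the master's address are assumptions made for illustration.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hedged sketch of the MoveServers call visible in this log; not the real test code.
public class MoveMasterToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Re-create the special "master" group (the RemoveRSGroup/AddRSGroup records above).
      rsGroupAdmin.addRSGroup("master");
      // Ask the master to move its own address into that group. The master is not a
      // region server, so the server side throws
      // ConstraintException("Server ... is either offline or it does not exist.").
      ServerName master = admin.getClusterMetrics().getMasterName();
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts(master.getHostname(), master.getPort())),
            "master");
      } catch (Exception ex) {
        // The test tolerates this failure and merely logs it.
        System.out.println("Got this on setup, FYI: " + ex);
      }
    }
  }
}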
2023-07-21 05:14:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,189 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689199, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,200 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:49,202 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,203 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,228 INFO [Listener at localhost/37815] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? -, OpenFileDescriptor=830 (was 830), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=419 (was 419), ProcessCount=174 (was 174), AvailableMemoryMB=3494 (was 3492) - AvailableMemoryMB LEAK? 
- 2023-07-21 05:14:49,228 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 05:14:49,248 INFO [Listener at localhost/37815] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=830, MaxFileDescriptor=60000, SystemLoadAverage=419, ProcessCount=174, AvailableMemoryMB=3494 2023-07-21 05:14:49,248 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-21 05:14:49,248 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 05:14:49,253 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:49,253 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 05:14:49,254 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:49,254 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 05:14:49,254 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:49,254 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 05:14:49,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,268 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689281, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,286 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:49,288 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,291 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:49,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,315 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689325, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,326 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:49,328 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,329 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,350 INFO [Listener at localhost/37815] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=830 (was 830), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=419 (was 419), ProcessCount=174 (was 174), AvailableMemoryMB=3490 (was 3494) 2023-07-21 05:14:49,350 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 05:14:49,368 INFO [Listener at localhost/37815] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=830, MaxFileDescriptor=60000, SystemLoadAverage=419, ProcessCount=174, AvailableMemoryMB=3489 2023-07-21 05:14:49,368 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 05:14:49,368 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 05:14:49,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 05:14:49,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,383 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,385 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689392, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,392 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 05:14:49,394 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,395 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,396 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 05:14:49,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-21 05:14:49,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 05:14:49,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 05:14:49,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 05:14:49,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,413 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 05:14:49,422 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:49,426 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 17 msec 2023-07-21 05:14:49,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 05:14:49,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 05:14:49,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:52762 deadline: 1689917689515, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 05:14:49,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 05:14:49,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 05:14:49,539 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 05:14:49,540 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 16 msec 2023-07-21 05:14:49,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 05:14:49,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-21 05:14:49,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 05:14:49,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 05:14:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 05:14:49,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-21 05:14:49,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,658 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,660 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 05:14:49,662 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,663 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 05:14:49,663 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 05:14:49,664 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,665 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 05:14:49,667 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-21 05:14:49,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 05:14:49,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 05:14:49,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 05:14:49,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 05:14:49,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:52762 deadline: 1689916549775, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 05:14:49,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:49,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-21 05:14:49,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 05:14:49,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 05:14:49,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 05:14:49,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 05:14:49,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 05:14:49,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 05:14:49,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 05:14:49,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 05:14:49,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 05:14:49,801 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 05:14:49,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 05:14:49,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 05:14:49,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 05:14:49,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 05:14:49,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 05:14:49,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41465] to rsgroup master 2023-07-21 05:14:49,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 05:14:49,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52762 deadline: 1689917689813, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 2023-07-21 05:14:49,814 WARN [Listener at localhost/37815] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41465 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 05:14:49,816 INFO [Listener at localhost/37815] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 05:14:49,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 05:14:49,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 05:14:49,817 INFO [Listener at localhost/37815] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36007, jenkins-hbase4.apache.org:36711, jenkins-hbase4.apache.org:37839, jenkins-hbase4.apache.org:38681], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 05:14:49,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 05:14:49,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41465] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 05:14:49,839 INFO [Listener at localhost/37815] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=830 (was 830), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=419 (was 419), ProcessCount=174 (was 174), AvailableMemoryMB=3470 (was 3489) 2023-07-21 05:14:49,839 WARN [Listener at localhost/37815] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-21 05:14:49,839 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 05:14:49,839 INFO [Listener at localhost/37815] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 05:14:49,839 DEBUG [Listener at localhost/37815] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5fb63931 to 127.0.0.1:53364 2023-07-21 05:14:49,839 DEBUG [Listener at localhost/37815] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,839 DEBUG [Listener at localhost/37815] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 
05:14:49,840 DEBUG [Listener at localhost/37815] util.JVMClusterUtil(257): Found active master hash=908711051, stopped=false 2023-07-21 05:14:49,840 DEBUG [Listener at localhost/37815] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 05:14:49,840 DEBUG [Listener at localhost/37815] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 05:14:49,840 INFO [Listener at localhost/37815] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41465,1689916486204 2023-07-21 05:14:49,841 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:49,841 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:49,841 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:49,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:49,842 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:49,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:49,842 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 05:14:49,842 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 05:14:49,842 INFO [Listener at localhost/37815] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 05:14:49,842 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:49,843 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:49,843 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 05:14:49,843 DEBUG [Listener at localhost/37815] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7dd65c46 to 127.0.0.1:53364 
2023-07-21 05:14:49,843 DEBUG [Listener at localhost/37815] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,843 INFO [Listener at localhost/37815] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37839,1689916486265' ***** 2023-07-21 05:14:49,843 INFO [Listener at localhost/37815] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:49,843 INFO [Listener at localhost/37815] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36711,1689916486312' ***** 2023-07-21 05:14:49,843 INFO [Listener at localhost/37815] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:49,843 INFO [Listener at localhost/37815] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36007,1689916486355' ***** 2023-07-21 05:14:49,844 INFO [Listener at localhost/37815] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:49,844 INFO [Listener at localhost/37815] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38681,1689916487640' ***** 2023-07-21 05:14:49,844 INFO [Listener at localhost/37815] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 05:14:49,844 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:49,844 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:49,844 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:49,844 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 05:14:49,849 INFO [RS:3;jenkins-hbase4:38681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b5847ef{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:49,849 INFO [RS:2;jenkins-hbase4:36007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@373f217{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:49,849 INFO [RS:0;jenkins-hbase4:37839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3119830f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:49,849 INFO [RS:1;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4e6eccad{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-21 05:14:49,851 INFO [RS:3;jenkins-hbase4:38681] server.AbstractConnector(383): Stopped ServerConnector@307a6dc1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:49,851 INFO [RS:3;jenkins-hbase4:38681] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:49,851 INFO [RS:1;jenkins-hbase4:36711] server.AbstractConnector(383): Stopped ServerConnector@28b0b21b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:49,851 INFO [RS:0;jenkins-hbase4:37839] server.AbstractConnector(383): Stopped ServerConnector@5a351aef{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:49,851 INFO [RS:2;jenkins-hbase4:36007] 
server.AbstractConnector(383): Stopped ServerConnector@46a2470b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 05:14:49,851 INFO [RS:0;jenkins-hbase4:37839] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:49,851 INFO [RS:3;jenkins-hbase4:38681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@52f634f0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:49,853 INFO [RS:0;jenkins-hbase4:37839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f5a8a4b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:49,851 INFO [RS:1;jenkins-hbase4:36711] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:49,854 INFO [RS:0;jenkins-hbase4:37839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ee8cb98{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:49,853 INFO [RS:3;jenkins-hbase4:38681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78bcc1ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:49,852 INFO [RS:2;jenkins-hbase4:36007] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 05:14:49,855 INFO [RS:1;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@59d4cd27{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:49,856 INFO [RS:3;jenkins-hbase4:38681] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:49,856 INFO [RS:3;jenkins-hbase4:38681] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:49,856 INFO [RS:3;jenkins-hbase4:38681] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:49,856 INFO [RS:0;jenkins-hbase4:37839] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:49,856 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:49,856 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:49,856 DEBUG [RS:3;jenkins-hbase4:38681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fd1c6d6 to 127.0.0.1:53364 2023-07-21 05:14:49,856 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:49,856 DEBUG [RS:3;jenkins-hbase4:38681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,856 INFO [RS:0;jenkins-hbase4:37839] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:49,856 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38681,1689916487640; all regions closed. 
2023-07-21 05:14:49,856 INFO [RS:0;jenkins-hbase4:37839] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:49,856 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(3305): Received CLOSE for c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:49,862 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(3305): Received CLOSE for 4a7f959cb0cb6de634b4673eb2e845c3 2023-07-21 05:14:49,863 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:49,863 INFO [RS:2;jenkins-hbase4:36007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@9f1cc1a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-21 05:14:49,863 INFO [RS:1;jenkins-hbase4:36711] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2bc92e96{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:49,863 DEBUG [RS:0;jenkins-hbase4:37839] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5ce75a9c to 127.0.0.1:53364 2023-07-21 05:14:49,864 DEBUG [RS:0;jenkins-hbase4:37839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,864 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 05:14:49,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c445ace1c5aabcf0de02aaf524278130, disabling compactions & flushes 2023-07-21 05:14:49,864 INFO [RS:2;jenkins-hbase4:36007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ec14bc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,STOPPED} 2023-07-21 05:14:49,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:49,865 INFO [RS:1;jenkins-hbase4:36711] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:49,865 INFO [RS:2;jenkins-hbase4:36007] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 05:14:49,865 INFO [RS:1;jenkins-hbase4:36711] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:49,865 INFO [RS:1;jenkins-hbase4:36711] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:49,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:49,865 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1478): Online Regions={c445ace1c5aabcf0de02aaf524278130=hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130., 4a7f959cb0cb6de634b4673eb2e845c3=hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3.} 2023-07-21 05:14:49,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 
after waiting 0 ms 2023-07-21 05:14:49,866 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:49,865 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:49,866 DEBUG [RS:1;jenkins-hbase4:36711] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0121ab8d to 127.0.0.1:53364 2023-07-21 05:14:49,866 DEBUG [RS:1;jenkins-hbase4:36711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,866 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36711,1689916486312; all regions closed. 2023-07-21 05:14:49,865 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 05:14:49,865 INFO [RS:2;jenkins-hbase4:36007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 05:14:49,866 DEBUG [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1504): Waiting on 4a7f959cb0cb6de634b4673eb2e845c3, c445ace1c5aabcf0de02aaf524278130 2023-07-21 05:14:49,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:49,872 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c445ace1c5aabcf0de02aaf524278130 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 05:14:49,872 INFO [RS:2;jenkins-hbase4:36007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 05:14:49,872 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36007,1689916486355 2023-07-21 05:14:49,872 DEBUG [RS:2;jenkins-hbase4:36007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a3e2c2d to 127.0.0.1:53364 2023-07-21 05:14:49,873 DEBUG [RS:2;jenkins-hbase4:36007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,873 INFO [RS:2;jenkins-hbase4:36007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:49,873 INFO [RS:2;jenkins-hbase4:36007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:49,873 INFO [RS:2;jenkins-hbase4:36007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:49,873 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 05:14:49,873 DEBUG [RS:3;jenkins-hbase4:38681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs 2023-07-21 05:14:49,873 INFO [RS:3;jenkins-hbase4:38681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38681%2C1689916487640:(num 1689916487849) 2023-07-21 05:14:49,873 DEBUG [RS:3;jenkins-hbase4:38681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,873 INFO [RS:3;jenkins-hbase4:38681] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,874 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 05:14:49,875 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 05:14:49,875 DEBUG [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 05:14:49,875 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 05:14:49,876 INFO [RS:3;jenkins-hbase4:38681] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:49,876 INFO [RS:3;jenkins-hbase4:38681] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:49,876 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:49,876 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 05:14:49,876 INFO [RS:3;jenkins-hbase4:38681] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:49,876 INFO [RS:3;jenkins-hbase4:38681] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:49,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 05:14:49,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 05:14:49,876 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 05:14:49,877 INFO [RS:3;jenkins-hbase4:38681] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38681 2023-07-21 05:14:49,878 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-21 05:14:49,879 DEBUG [RS:1;jenkins-hbase4:36711] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs 2023-07-21 05:14:49,879 INFO [RS:1;jenkins-hbase4:36711] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36711%2C1689916486312:(num 1689916486818) 2023-07-21 05:14:49,879 DEBUG [RS:1;jenkins-hbase4:36711] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:49,879 INFO [RS:1;jenkins-hbase4:36711] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38681,1689916487640 2023-07-21 05:14:49,880 INFO [RS:1;jenkins-hbase4:36711] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:49,880 INFO [RS:1;jenkins-hbase4:36711] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:49,880 INFO [RS:1;jenkins-hbase4:36711] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:49,880 INFO [RS:1;jenkins-hbase4:36711] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,881 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,881 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:49,881 INFO [RS:1;jenkins-hbase4:36711] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36711 2023-07-21 05:14:49,880 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,883 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38681,1689916487640] 2023-07-21 05:14:49,883 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38681,1689916487640; numProcessing=1 2023-07-21 05:14:49,883 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:49,883 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:49,884 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36711,1689916486312 2023-07-21 05:14:49,883 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:49,885 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), 
to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/.tmp/info/25736e68390343619533bf83536e3497 2023-07-21 05:14:49,892 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 25736e68390343619533bf83536e3497 2023-07-21 05:14:49,896 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,897 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/info/e969ff5fcb844fb08ba3b477f8eae12a 2023-07-21 05:14:49,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/.tmp/info/25736e68390343619533bf83536e3497 as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/info/25736e68390343619533bf83536e3497 2023-07-21 05:14:49,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e969ff5fcb844fb08ba3b477f8eae12a 2023-07-21 05:14:49,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 25736e68390343619533bf83536e3497 2023-07-21 05:14:49,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/info/25736e68390343619533bf83536e3497, entries=3, sequenceid=9, filesize=5.0 K 2023-07-21 05:14:49,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for c445ace1c5aabcf0de02aaf524278130 in 31ms, sequenceid=9, compaction requested=false 2023-07-21 05:14:49,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/namespace/c445ace1c5aabcf0de02aaf524278130/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 05:14:49,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c445ace1c5aabcf0de02aaf524278130: 2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689916487044.c445ace1c5aabcf0de02aaf524278130. 
2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4a7f959cb0cb6de634b4673eb2e845c3, disabling compactions & flushes 2023-07-21 05:14:49,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. after waiting 0 ms 2023-07-21 05:14:49,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:49,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4a7f959cb0cb6de634b4673eb2e845c3 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-21 05:14:49,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/rep_barrier/6d379c2820a440d489f59ff6ac5c946c 2023-07-21 05:14:49,920 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6d379c2820a440d489f59ff6ac5c946c 2023-07-21 05:14:49,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/.tmp/m/426d8fcc6839480ab1e113ec07edadb6 2023-07-21 05:14:49,925 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:49,926 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 426d8fcc6839480ab1e113ec07edadb6 2023-07-21 05:14:49,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/.tmp/m/426d8fcc6839480ab1e113ec07edadb6 as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/m/426d8fcc6839480ab1e113ec07edadb6 2023-07-21 05:14:49,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 426d8fcc6839480ab1e113ec07edadb6 2023-07-21 05:14:49,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/m/426d8fcc6839480ab1e113ec07edadb6, entries=12, sequenceid=29, filesize=5.4 K 2023-07-21 05:14:49,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 4a7f959cb0cb6de634b4673eb2e845c3 in 24ms, sequenceid=29, compaction requested=false 2023-07-21 05:14:49,937 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/table/3f2c4c34e7be46e1a958af1c86f03070 2023-07-21 05:14:49,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/rsgroup/4a7f959cb0cb6de634b4673eb2e845c3/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 05:14:49,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:49,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:49,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4a7f959cb0cb6de634b4673eb2e845c3: 2023-07-21 05:14:49,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689916487160.4a7f959cb0cb6de634b4673eb2e845c3. 2023-07-21 05:14:49,942 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f2c4c34e7be46e1a958af1c86f03070 2023-07-21 05:14:49,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/info/e969ff5fcb844fb08ba3b477f8eae12a as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/info/e969ff5fcb844fb08ba3b477f8eae12a 2023-07-21 05:14:49,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e969ff5fcb844fb08ba3b477f8eae12a 2023-07-21 05:14:49,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/info/e969ff5fcb844fb08ba3b477f8eae12a, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 05:14:49,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/rep_barrier/6d379c2820a440d489f59ff6ac5c946c as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/rep_barrier/6d379c2820a440d489f59ff6ac5c946c 2023-07-21 05:14:49,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6d379c2820a440d489f59ff6ac5c946c 2023-07-21 05:14:49,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/rep_barrier/6d379c2820a440d489f59ff6ac5c946c, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 05:14:49,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/.tmp/table/3f2c4c34e7be46e1a958af1c86f03070 as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/table/3f2c4c34e7be46e1a958af1c86f03070 2023-07-21 05:14:49,960 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f2c4c34e7be46e1a958af1c86f03070 2023-07-21 05:14:49,960 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/table/3f2c4c34e7be46e1a958af1c86f03070, entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 05:14:49,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 85ms, sequenceid=26, compaction requested=false 2023-07-21 05:14:49,969 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 05:14:49,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 05:14:49,971 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:49,971 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 05:14:49,971 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 05:14:49,983 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:49,983 INFO [RS:3;jenkins-hbase4:38681] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38681,1689916487640; zookeeper connection closed. 
2023-07-21 05:14:49,983 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:38681-0x101864db52b000b, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 05:14:49,983 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@60ddaa90] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@60ddaa90 2023-07-21 05:14:49,986 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38681,1689916487640 already deleted, retry=false 2023-07-21 05:14:49,986 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38681,1689916487640 expired; onlineServers=3 2023-07-21 05:14:49,986 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36711,1689916486312] 2023-07-21 05:14:49,986 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36711,1689916486312; numProcessing=2 2023-07-21 05:14:49,987 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36711,1689916486312 already deleted, retry=false 2023-07-21 05:14:49,987 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36711,1689916486312 expired; onlineServers=2 2023-07-21 05:14:50,072 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37839,1689916486265; all regions closed. 2023-07-21 05:14:50,075 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36007,1689916486355; all regions closed. 2023-07-21 05:14:50,081 DEBUG [RS:0;jenkins-hbase4:37839] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs 2023-07-21 05:14:50,081 INFO [RS:0;jenkins-hbase4:37839] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37839%2C1689916486265:(num 1689916486810) 2023-07-21 05:14:50,081 DEBUG [RS:0;jenkins-hbase4:37839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:50,082 INFO [RS:0;jenkins-hbase4:37839] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:50,082 INFO [RS:0;jenkins-hbase4:37839] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:50,082 INFO [RS:0;jenkins-hbase4:37839] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 05:14:50,082 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 05:14:50,082 INFO [RS:0;jenkins-hbase4:37839] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 05:14:50,082 INFO [RS:0;jenkins-hbase4:37839] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 05:14:50,087 INFO [RS:0;jenkins-hbase4:37839] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37839 2023-07-21 05:14:50,090 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:50,090 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 05:14:50,090 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37839,1689916486265 2023-07-21 05:14:50,091 DEBUG [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs 2023-07-21 05:14:50,091 INFO [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36007%2C1689916486355.meta:.meta(num 1689916486963) 2023-07-21 05:14:50,092 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37839,1689916486265] 2023-07-21 05:14:50,092 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37839,1689916486265; numProcessing=3 2023-07-21 05:14:50,097 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37839,1689916486265 already deleted, retry=false 2023-07-21 05:14:50,097 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37839,1689916486265 expired; onlineServers=1 2023-07-21 05:14:50,101 DEBUG [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/oldWALs 2023-07-21 05:14:50,101 INFO [RS:2;jenkins-hbase4:36007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36007%2C1689916486355:(num 1689916486818) 2023-07-21 05:14:50,101 DEBUG [RS:2;jenkins-hbase4:36007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 05:14:50,101 INFO [RS:2;jenkins-hbase4:36007] regionserver.LeaseManager(133): Closed leases 2023-07-21 05:14:50,101 INFO [RS:2;jenkins-hbase4:36007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 05:14:50,101 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 05:14:50,102 INFO [RS:2;jenkins-hbase4:36007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36007
2023-07-21 05:14:50,104 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36007,1689916486355
2023-07-21 05:14:50,104 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-21 05:14:50,105 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36007,1689916486355]
2023-07-21 05:14:50,105 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36007,1689916486355; numProcessing=4
2023-07-21 05:14:50,106 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36007,1689916486355 already deleted, retry=false
2023-07-21 05:14:50,106 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36007,1689916486355 expired; onlineServers=0
2023-07-21 05:14:50,106 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41465,1689916486204' *****
2023-07-21 05:14:50,106 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-21 05:14:50,107 DEBUG [M:0;jenkins-hbase4:41465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32165696, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-21 05:14:50,107 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-21 05:14:50,109 INFO [M:0;jenkins-hbase4:41465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7582803a{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-21 05:14:50,110 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-21 05:14:50,110 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-21 05:14:50,110 INFO [M:0;jenkins-hbase4:41465] server.AbstractConnector(383): Stopped ServerConnector@17bb529f{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-21 05:14:50,110 INFO [M:0;jenkins-hbase4:41465] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-21 05:14:50,110 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-21 05:14:50,111 INFO [M:0;jenkins-hbase4:41465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@66ba58da{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED}
2023-07-21 05:14:50,111 INFO [M:0;jenkins-hbase4:41465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@26442371{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/hadoop.log.dir/,STOPPED}
2023-07-21 05:14:50,112 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41465,1689916486204
2023-07-21 05:14:50,112 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41465,1689916486204; all regions closed.
2023-07-21 05:14:50,112 DEBUG [M:0;jenkins-hbase4:41465] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-21 05:14:50,112 INFO [M:0;jenkins-hbase4:41465] master.HMaster(1491): Stopping master jetty server
2023-07-21 05:14:50,112 INFO [M:0;jenkins-hbase4:41465] server.AbstractConnector(383): Stopped ServerConnector@7a6873aa{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-21 05:14:50,113 DEBUG [M:0;jenkins-hbase4:41465] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-21 05:14:50,113 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-21 05:14:50,113 DEBUG [M:0;jenkins-hbase4:41465] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-21 05:14:50,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916486559] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689916486559,5,FailOnTimeoutGroup]
2023-07-21 05:14:50,113 INFO [M:0;jenkins-hbase4:41465] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-21 05:14:50,113 INFO [M:0;jenkins-hbase4:41465] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-21 05:14:50,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916486562] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689916486562,5,FailOnTimeoutGroup]
2023-07-21 05:14:50,113 INFO [M:0;jenkins-hbase4:41465] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-21 05:14:50,113 DEBUG [M:0;jenkins-hbase4:41465] master.HMaster(1512): Stopping service threads
2023-07-21 05:14:50,113 INFO [M:0;jenkins-hbase4:41465] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-21 05:14:50,113 ERROR [M:0;jenkins-hbase4:41465] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-21 05:14:50,113 INFO [M:0;jenkins-hbase4:41465] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-21 05:14:50,114 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-21 05:14:50,114 DEBUG [M:0;jenkins-hbase4:41465] zookeeper.ZKUtil(398): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-21 05:14:50,114 WARN [M:0;jenkins-hbase4:41465] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-21 05:14:50,114 INFO [M:0;jenkins-hbase4:41465] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-21 05:14:50,114 INFO [M:0;jenkins-hbase4:41465] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-21 05:14:50,114 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-21 05:14:50,114 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 05:14:50,114 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 05:14:50,114 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-21 05:14:50,114 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 05:14:50,114 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.24 KB heapSize=90.66 KB
2023-07-21 05:14:50,128 INFO [M:0;jenkins-hbase4:41465] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.24 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/684a20a30cf14cd68208a97a8765fcd9
2023-07-21 05:14:50,134 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/684a20a30cf14cd68208a97a8765fcd9 as hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/684a20a30cf14cd68208a97a8765fcd9
2023-07-21 05:14:50,139 INFO [M:0;jenkins-hbase4:41465] regionserver.HStore(1080): Added hdfs://localhost:35849/user/jenkins/test-data/03700488-e917-8b88-3ad8-00501ed5433f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/684a20a30cf14cd68208a97a8765fcd9, entries=22, sequenceid=175, filesize=11.1 K
2023-07-21 05:14:50,140 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegion(2948): Finished flush of dataSize ~76.24 KB/78067, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=175, compaction requested=false
2023-07-21 05:14:50,142 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-21 05:14:50,142 DEBUG [M:0;jenkins-hbase4:41465] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-21 05:14:50,145 INFO [M:0;jenkins-hbase4:41465] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-21 05:14:50,145 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-21 05:14:50,146 INFO [M:0;jenkins-hbase4:41465] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41465
2023-07-21 05:14:50,147 DEBUG [M:0;jenkins-hbase4:41465] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41465,1689916486204 already deleted, retry=false
2023-07-21 05:14:50,543 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,543 INFO [M:0;jenkins-hbase4:41465] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41465,1689916486204; zookeeper connection closed.
2023-07-21 05:14:50,543 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): master:41465-0x101864db52b0000, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,643 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,643 INFO [RS:2;jenkins-hbase4:36007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36007,1689916486355; zookeeper connection closed.
2023-07-21 05:14:50,643 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36007-0x101864db52b0003, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,643 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@32a76734] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@32a76734
2023-07-21 05:14:50,743 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,743 INFO [RS:0;jenkins-hbase4:37839] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37839,1689916486265; zookeeper connection closed.
2023-07-21 05:14:50,743 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:37839-0x101864db52b0001, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,744 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15f6301c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15f6301c
2023-07-21 05:14:50,843 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,844 DEBUG [Listener at localhost/37815-EventThread] zookeeper.ZKWatcher(600): regionserver:36711-0x101864db52b0002, quorum=127.0.0.1:53364, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-21 05:14:50,844 INFO [RS:1;jenkins-hbase4:36711] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36711,1689916486312; zookeeper connection closed.
2023-07-21 05:14:50,844 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@294ff731] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@294ff731
2023-07-21 05:14:50,844 INFO [Listener at localhost/37815] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-21 05:14:50,844 WARN [Listener at localhost/37815] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 05:14:50,848 INFO [Listener at localhost/37815] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 05:14:50,951 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 05:14:50,951 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1240127492-172.31.14.131-1689916485425 (Datanode Uuid f15140a7-f1a3-48f1-9fe1-47efe4e38024) service to localhost/127.0.0.1:35849
2023-07-21 05:14:50,951 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data5/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:50,952 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data6/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:50,953 WARN [Listener at localhost/37815] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 05:14:50,956 INFO [Listener at localhost/37815] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 05:14:51,060 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 05:14:51,060 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1240127492-172.31.14.131-1689916485425 (Datanode Uuid 4efdc82f-9fce-4092-a75a-6319dec85e27) service to localhost/127.0.0.1:35849
2023-07-21 05:14:51,061 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data3/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:51,061 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data4/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:51,062 WARN [Listener at localhost/37815] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-21 05:14:51,065 INFO [Listener at localhost/37815] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 05:14:51,168 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-21 05:14:51,168 WARN [BP-1240127492-172.31.14.131-1689916485425 heartbeating to localhost/127.0.0.1:35849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1240127492-172.31.14.131-1689916485425 (Datanode Uuid 4108575c-63fe-404e-ae9f-65cacfd20645) service to localhost/127.0.0.1:35849
2023-07-21 05:14:51,169 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data1/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:51,169 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c4b46489-f5bd-f317-04fc-6a74f9b5679c/cluster_689df1dd-b752-5929-5ad7-43423a0abb81/dfs/data/data2/current/BP-1240127492-172.31.14.131-1689916485425] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-21 05:14:51,179 INFO [Listener at localhost/37815] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-21 05:14:51,294 INFO [Listener at localhost/37815] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-21 05:14:51,325 INFO [Listener at localhost/37815] hbase.HBaseTestingUtility(1293): Minicluster is down