2023-07-15 13:15:19,075 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad 2023-07-15 13:15:19,096 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-15 13:15:19,117 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 13:15:19,118 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1, deleteOnExit=true 2023-07-15 13:15:19,118 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 13:15:19,119 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/test.cache.data in system properties and HBase conf 2023-07-15 13:15:19,119 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 13:15:19,120 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir in system properties and HBase conf 2023-07-15 13:15:19,122 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 13:15:19,122 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 13:15:19,123 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 13:15:19,275 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-15 13:15:19,726 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-15 13:15:19,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:15:19,733 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:15:19,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 13:15:19,734 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:15:19,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 13:15:19,735 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 13:15:19,736 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:15:19,736 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:15:19,737 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 13:15:19,737 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/nfs.dump.dir in system properties and HBase conf 2023-07-15 13:15:19,737 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir in system properties and HBase conf 2023-07-15 13:15:19,738 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:15:19,738 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 13:15:19,739 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 13:15:20,328 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:15:20,332 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:15:20,608 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-15 13:15:20,781 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-15 13:15:20,796 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:20,838 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:20,871 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/Jetty_localhost_43027_hdfs____.mq1gow/webapp 2023-07-15 13:15:21,025 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43027 2023-07-15 13:15:21,036 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:15:21,036 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:15:21,534 WARN [Listener at localhost/42517] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:21,687 WARN [Listener at localhost/42517] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:21,706 WARN [Listener at localhost/42517] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:21,713 INFO [Listener at localhost/42517] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:21,719 INFO [Listener at localhost/42517] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/Jetty_localhost_46629_datanode____.jcnd5q/webapp 2023-07-15 13:15:21,822 INFO [Listener at localhost/42517] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46629 2023-07-15 13:15:22,214 WARN [Listener at localhost/34867] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:22,232 WARN [Listener at localhost/34867] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:22,235 WARN [Listener at localhost/34867] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:22,238 INFO [Listener at localhost/34867] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:22,263 INFO [Listener at localhost/34867] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/Jetty_localhost_45755_datanode____8cm671/webapp 2023-07-15 13:15:22,378 INFO [Listener at localhost/34867] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45755 2023-07-15 13:15:22,400 WARN [Listener at localhost/41829] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:22,439 WARN [Listener at localhost/41829] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:22,442 WARN [Listener at localhost/41829] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:22,445 INFO [Listener at localhost/41829] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:22,455 INFO [Listener at localhost/41829] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/Jetty_localhost_40677_datanode____.sic8v/webapp 2023-07-15 13:15:22,577 INFO [Listener at localhost/41829] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40677 2023-07-15 13:15:22,590 WARN [Listener at localhost/38739] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:22,855 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39128696917b0876: Processing first storage report for DS-0f3a48f8-8cb0-454b-850a-8844cb779b84 from datanode 5b565817-c1a0-4dde-bff3-3b0b4d751122 2023-07-15 13:15:22,857 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x39128696917b0876: from storage DS-0f3a48f8-8cb0-454b-850a-8844cb779b84 node DatanodeRegistration(127.0.0.1:43995, datanodeUuid=5b565817-c1a0-4dde-bff3-3b0b4d751122, infoPort=44093, 
infoSecurePort=0, ipcPort=41829, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-15 13:15:22,857 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63e7bd5142aa2464: Processing first storage report for DS-834fac10-3713-40cb-b8d3-78d39d80cf56 from datanode 8f649353-adf6-4753-909f-1c23368d8c9e 2023-07-15 13:15:22,857 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63e7bd5142aa2464: from storage DS-834fac10-3713-40cb-b8d3-78d39d80cf56 node DatanodeRegistration(127.0.0.1:33833, datanodeUuid=8f649353-adf6-4753-909f-1c23368d8c9e, infoPort=38495, infoSecurePort=0, ipcPort=38739, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-15 13:15:22,858 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3c23172cac8ed828: Processing first storage report for DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35 from datanode 1b1a8ca7-71ca-4e9a-aa52-f850b969373a 2023-07-15 13:15:22,858 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3c23172cac8ed828: from storage DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35 node DatanodeRegistration(127.0.0.1:39307, datanodeUuid=1b1a8ca7-71ca-4e9a-aa52-f850b969373a, infoPort=44193, infoSecurePort=0, ipcPort=34867, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:22,858 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39128696917b0876: Processing first storage report for DS-d84353cc-a22a-4287-98fc-c9dea2194ba0 from datanode 5b565817-c1a0-4dde-bff3-3b0b4d751122 2023-07-15 13:15:22,858 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x39128696917b0876: from storage DS-d84353cc-a22a-4287-98fc-c9dea2194ba0 node DatanodeRegistration(127.0.0.1:43995, datanodeUuid=5b565817-c1a0-4dde-bff3-3b0b4d751122, infoPort=44093, infoSecurePort=0, ipcPort=41829, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:22,859 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x63e7bd5142aa2464: Processing first storage report for DS-207b416f-2956-4a3e-84c1-433d11597445 from datanode 8f649353-adf6-4753-909f-1c23368d8c9e 2023-07-15 13:15:22,859 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x63e7bd5142aa2464: from storage DS-207b416f-2956-4a3e-84c1-433d11597445 node DatanodeRegistration(127.0.0.1:33833, datanodeUuid=8f649353-adf6-4753-909f-1c23368d8c9e, infoPort=38495, infoSecurePort=0, ipcPort=38739, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:22,859 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3c23172cac8ed828: Processing first storage report for DS-5bc48cf2-c88c-4baa-82f4-9fdb621de41f from datanode 1b1a8ca7-71ca-4e9a-aa52-f850b969373a 2023-07-15 13:15:22,859 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3c23172cac8ed828: from storage 
DS-5bc48cf2-c88c-4baa-82f4-9fdb621de41f node DatanodeRegistration(127.0.0.1:39307, datanodeUuid=1b1a8ca7-71ca-4e9a-aa52-f850b969373a, infoPort=44193, infoSecurePort=0, ipcPort=34867, storageInfo=lv=-57;cid=testClusterID;nsid=230323452;c=1689426920401), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:23,020 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad 2023-07-15 13:15:23,099 INFO [Listener at localhost/38739] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/zookeeper_0, clientPort=54157, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 13:15:23,115 INFO [Listener at localhost/38739] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54157 2023-07-15 13:15:23,126 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:23,128 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:23,839 INFO [Listener at localhost/38739] util.FSUtils(471): Created version file at hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe with version=8 2023-07-15 13:15:23,840 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/hbase-staging 2023-07-15 13:15:23,848 DEBUG [Listener at localhost/38739] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 13:15:23,849 DEBUG [Listener at localhost/38739] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 13:15:23,849 DEBUG [Listener at localhost/38739] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 13:15:23,849 DEBUG [Listener at localhost/38739] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-15 13:15:24,221 INFO [Listener at localhost/38739] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-15 13:15:24,808 INFO [Listener at localhost/38739] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:24,850 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:24,850 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:24,851 INFO [Listener at localhost/38739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:24,851 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:24,851 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:25,012 INFO [Listener at localhost/38739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:25,093 DEBUG [Listener at localhost/38739] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-15 13:15:25,198 INFO [Listener at localhost/38739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40693 2023-07-15 13:15:25,209 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:25,211 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:25,236 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40693 connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:25,281 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:406930x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:25,299 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40693-0x101691f914d0000 connected 2023-07-15 13:15:25,333 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:25,334 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:25,339 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:25,361 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40693 2023-07-15 13:15:25,361 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40693 2023-07-15 13:15:25,365 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40693 2023-07-15 13:15:25,368 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40693 2023-07-15 13:15:25,368 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40693 2023-07-15 13:15:25,412 INFO [Listener at localhost/38739] log.Log(170): Logging initialized @7238ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-15 13:15:25,576 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:25,577 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:25,577 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:25,579 INFO [Listener at localhost/38739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 13:15:25,580 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:25,580 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:25,583 INFO [Listener at localhost/38739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 13:15:25,657 INFO [Listener at localhost/38739] http.HttpServer(1146): Jetty bound to port 45689 2023-07-15 13:15:25,659 INFO [Listener at localhost/38739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:25,697 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:25,700 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@128132e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:25,701 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:25,701 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@577a3a17{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:25,923 INFO [Listener at localhost/38739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:25,943 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:25,943 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:25,946 INFO [Listener at localhost/38739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:15:25,953 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:25,981 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@664114d7{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/jetty-0_0_0_0-45689-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7385033878606735707/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:15:25,994 INFO [Listener at localhost/38739] server.AbstractConnector(333): Started ServerConnector@40debc20{HTTP/1.1, (http/1.1)}{0.0.0.0:45689} 2023-07-15 13:15:25,994 INFO [Listener at localhost/38739] server.Server(415): Started @7821ms 2023-07-15 13:15:25,999 INFO [Listener at localhost/38739] master.HMaster(444): hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe, hbase.cluster.distributed=false 2023-07-15 13:15:26,100 INFO [Listener at localhost/38739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:26,100 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,100 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,101 INFO 
[Listener at localhost/38739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:26,101 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,101 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:26,108 INFO [Listener at localhost/38739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:26,111 INFO [Listener at localhost/38739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37679 2023-07-15 13:15:26,114 INFO [Listener at localhost/38739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:26,123 DEBUG [Listener at localhost/38739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:26,125 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,127 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,129 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37679 connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:26,138 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:376790x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:26,140 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37679-0x101691f914d0001 connected 2023-07-15 13:15:26,140 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:26,142 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:26,143 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:26,144 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37679 2023-07-15 13:15:26,144 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37679 2023-07-15 13:15:26,145 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37679 2023-07-15 13:15:26,146 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37679 2023-07-15 13:15:26,146 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37679 2023-07-15 13:15:26,149 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:26,149 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:26,149 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:26,151 INFO [Listener at localhost/38739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:26,151 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:26,152 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:26,152 INFO [Listener at localhost/38739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:15:26,154 INFO [Listener at localhost/38739] http.HttpServer(1146): Jetty bound to port 46479 2023-07-15 13:15:26,155 INFO [Listener at localhost/38739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:26,157 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,158 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40e7421c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:26,158 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,158 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21aec1e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:26,292 INFO [Listener at localhost/38739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:26,293 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:26,293 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:26,293 INFO [Listener at localhost/38739] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:15:26,295 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,299 INFO 
[Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@46748067{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/jetty-0_0_0_0-46479-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2328415879148328550/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:26,300 INFO [Listener at localhost/38739] server.AbstractConnector(333): Started ServerConnector@2f894590{HTTP/1.1, (http/1.1)}{0.0.0.0:46479} 2023-07-15 13:15:26,300 INFO [Listener at localhost/38739] server.Server(415): Started @8127ms 2023-07-15 13:15:26,315 INFO [Listener at localhost/38739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:26,315 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,315 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,316 INFO [Listener at localhost/38739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:26,316 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,316 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:26,317 INFO [Listener at localhost/38739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:26,319 INFO [Listener at localhost/38739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34837 2023-07-15 13:15:26,319 INFO [Listener at localhost/38739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:26,322 DEBUG [Listener at localhost/38739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:26,323 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,324 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,326 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34837 connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:26,330 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:348370x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 
13:15:26,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34837-0x101691f914d0002 connected 2023-07-15 13:15:26,332 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:26,333 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:26,334 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:26,335 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34837 2023-07-15 13:15:26,335 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34837 2023-07-15 13:15:26,338 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34837 2023-07-15 13:15:26,343 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34837 2023-07-15 13:15:26,343 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34837 2023-07-15 13:15:26,345 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:26,346 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:26,346 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:26,346 INFO [Listener at localhost/38739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:26,346 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:26,347 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:26,347 INFO [Listener at localhost/38739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 13:15:26,347 INFO [Listener at localhost/38739] http.HttpServer(1146): Jetty bound to port 39375 2023-07-15 13:15:26,348 INFO [Listener at localhost/38739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:26,355 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,355 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1536303c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:26,356 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,356 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f68fdb3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:26,482 INFO [Listener at localhost/38739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:26,484 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:26,484 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:26,484 INFO [Listener at localhost/38739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:15:26,486 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,487 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@36ab60ce{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/jetty-0_0_0_0-39375-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4873573072750535297/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:26,489 INFO [Listener at localhost/38739] server.AbstractConnector(333): Started ServerConnector@1dc2f9d9{HTTP/1.1, (http/1.1)}{0.0.0.0:39375} 2023-07-15 13:15:26,489 INFO [Listener at localhost/38739] server.Server(415): Started @8316ms 2023-07-15 13:15:26,503 INFO [Listener at localhost/38739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:26,504 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,504 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,504 INFO [Listener at localhost/38739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:26,504 INFO 
[Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:26,505 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:26,505 INFO [Listener at localhost/38739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:26,507 INFO [Listener at localhost/38739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38761 2023-07-15 13:15:26,507 INFO [Listener at localhost/38739] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:26,511 DEBUG [Listener at localhost/38739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:26,512 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,514 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,516 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38761 connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:26,520 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:387610x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:26,521 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:387610x0, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:26,522 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:387610x0, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:26,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38761-0x101691f914d0003 connected 2023-07-15 13:15:26,523 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:26,524 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38761 2023-07-15 13:15:26,524 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38761 2023-07-15 13:15:26,525 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38761 2023-07-15 13:15:26,526 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38761 2023-07-15 13:15:26,526 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38761 
2023-07-15 13:15:26,529 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:26,529 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:26,529 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:26,530 INFO [Listener at localhost/38739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:26,530 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:26,530 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:26,530 INFO [Listener at localhost/38739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:15:26,531 INFO [Listener at localhost/38739] http.HttpServer(1146): Jetty bound to port 34039 2023-07-15 13:15:26,531 INFO [Listener at localhost/38739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:26,535 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,536 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@be6058f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:26,536 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,537 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a081fb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:26,665 INFO [Listener at localhost/38739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:26,666 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:26,666 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:26,667 INFO [Listener at localhost/38739] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:15:26,668 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:26,668 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@db55a1c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/jetty-0_0_0_0-34039-hbase-server-2_4_18-SNAPSHOT_jar-_-any-122088884342095558/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:26,670 INFO [Listener at localhost/38739] server.AbstractConnector(333): Started ServerConnector@2286bf8a{HTTP/1.1, (http/1.1)}{0.0.0.0:34039} 2023-07-15 13:15:26,670 INFO [Listener at localhost/38739] server.Server(415): Started @8496ms 2023-07-15 13:15:26,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:26,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1d76baff{HTTP/1.1, (http/1.1)}{0.0.0.0:35935} 2023-07-15 13:15:26,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8507ms 2023-07-15 13:15:26,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:26,690 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:15:26,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:26,711 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:26,712 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:26,712 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:26,711 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:26,712 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:26,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:15:26,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40693,1689426924021 from backup master directory 2023-07-15 13:15:26,715 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:15:26,720 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:26,720 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:15:26,721 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:26,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:26,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-15 13:15:26,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-15 13:15:26,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/hbase.id with ID: 925ed21b-d5d3-43cb-ada8-6df6a6ac8d5d 2023-07-15 13:15:26,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:26,923 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:26,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1a068dda to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:27,010 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1149030e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:27,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:27,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 13:15:27,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-15 13:15:27,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-15 13:15:27,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-15 13:15:27,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-15 13:15:27,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:27,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store-tmp 2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:15:27,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:27,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 13:15:27,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:15:27,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/WALs/jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:27,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40693%2C1689426924021, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/WALs/jenkins-hbase4.apache.org,40693,1689426924021, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/oldWALs, maxLogs=10 2023-07-15 13:15:27,286 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:27,286 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:27,286 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:27,295 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-15 13:15:27,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/WALs/jenkins-hbase4.apache.org,40693,1689426924021/jenkins-hbase4.apache.org%2C40693%2C1689426924021.1689426927225 2023-07-15 13:15:27,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK], DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK]] 2023-07-15 13:15:27,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:27,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:27,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,487 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,495 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 13:15:27,532 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 13:15:27,549 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-15 13:15:27,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:27,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:27,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9669045280, jitterRate=-0.09949998557567596}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:27,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:15:27,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 13:15:27,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 13:15:27,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 13:15:27,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 13:15:27,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-15 13:15:27,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 44 msec 2023-07-15 13:15:27,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 13:15:27,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 13:15:27,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-15 13:15:27,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 13:15:27,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 13:15:27,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 13:15:27,733 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:27,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 13:15:27,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 13:15:27,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 13:15:27,753 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:27,753 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:27,753 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:27,753 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:27,753 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:27,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40693,1689426924021, sessionid=0x101691f914d0000, setting cluster-up flag (Was=false) 2023-07-15 13:15:27,772 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:27,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 13:15:27,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:27,788 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:27,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 13:15:27,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:27,799 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.hbase-snapshot/.tmp 2023-07-15 13:15:27,875 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(951): ClusterId : 925ed21b-d5d3-43cb-ada8-6df6a6ac8d5d 2023-07-15 13:15:27,885 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(951): ClusterId : 925ed21b-d5d3-43cb-ada8-6df6a6ac8d5d 2023-07-15 13:15:27,890 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(951): ClusterId : 925ed21b-d5d3-43cb-ada8-6df6a6ac8d5d 2023-07-15 13:15:27,892 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:27,892 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:27,892 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:27,900 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:27,900 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:27,900 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:27,900 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:27,900 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:27,900 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:27,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 13:15:27,908 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:27,908 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:27,908 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:27,915 DEBUG 
[RS:1;jenkins-hbase4:34837] zookeeper.ReadOnlyZKClient(139): Connect 0x48f96027 to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:27,915 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ReadOnlyZKClient(139): Connect 0x00bf6c8b to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:27,916 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ReadOnlyZKClient(139): Connect 0x3d03a0a2 to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:27,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 13:15:27,929 DEBUG [RS:1;jenkins-hbase4:34837] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67163754, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:27,929 DEBUG [RS:1;jenkins-hbase4:34837] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@308f0d12, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:27,931 DEBUG [RS:0;jenkins-hbase4:37679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1800cfb5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:27,931 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:15:27,931 DEBUG [RS:2;jenkins-hbase4:38761] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e7882ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:27,931 DEBUG [RS:0;jenkins-hbase4:37679] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41e6a1b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:27,932 DEBUG [RS:2;jenkins-hbase4:38761] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a4707fb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:27,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 13:15:27,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-15 13:15:27,964 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37679 2023-07-15 13:15:27,965 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34837 2023-07-15 13:15:27,968 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38761 2023-07-15 13:15:27,972 INFO [RS:0;jenkins-hbase4:37679] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:27,975 INFO [RS:2;jenkins-hbase4:38761] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:27,975 INFO [RS:2;jenkins-hbase4:38761] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:27,975 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:27,972 INFO [RS:1;jenkins-hbase4:34837] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:27,975 INFO [RS:1;jenkins-hbase4:34837] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:27,975 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:27,975 INFO [RS:0;jenkins-hbase4:37679] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:27,976 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:27,983 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:38761, startcode=1689426926503 2023-07-15 13:15:27,984 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:37679, startcode=1689426926099 2023-07-15 13:15:27,987 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:34837, startcode=1689426926314 2023-07-15 13:15:28,009 DEBUG [RS:2;jenkins-hbase4:38761] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:28,009 DEBUG [RS:1;jenkins-hbase4:34837] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:28,009 DEBUG [RS:0;jenkins-hbase4:37679] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:28,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:28,129 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34757, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:28,129 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46449, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-15 13:15:28,129 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42555, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:28,145 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:28,159 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:28,161 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:28,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:15:28,192 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 13:15:28,192 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, 
ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 13:15:28,192 WARN [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 13:15:28,192 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 13:15:28,192 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 13:15:28,193 WARN [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 13:15:28,193 WARN [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 13:15:28,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:15:28,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 13:15:28,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:28,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689426958203 
2023-07-15 13:15:28,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 13:15:28,210 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:28,211 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 13:15:28,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 13:15:28,214 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:28,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 13:15:28,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 13:15:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 13:15:28,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 13:15:28,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:28,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 13:15:28,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 13:15:28,227 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 13:15:28,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 13:15:28,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 13:15:28,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426928233,5,FailOnTimeoutGroup] 2023-07-15 13:15:28,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426928235,5,FailOnTimeoutGroup] 2023-07-15 13:15:28,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-15 13:15:28,237 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,237 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,296 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:37679, startcode=1689426926099 2023-07-15 13:15:28,296 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:38761, startcode=1689426926503 2023-07-15 13:15:28,296 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:34837, startcode=1689426926314 2023-07-15 13:15:28,306 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 13:15:28,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 13:15:28,309 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:28,311 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:28,311 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe 2023-07-15 13:15:28,314 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,314 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:15:28,315 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-15 13:15:28,315 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,316 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 13:15:28,316 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe 2023-07-15 13:15:28,316 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 13:15:28,316 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42517 2023-07-15 13:15:28,316 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45689 2023-07-15 13:15:28,317 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe 2023-07-15 13:15:28,317 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42517 2023-07-15 13:15:28,317 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45689 2023-07-15 13:15:28,319 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe 2023-07-15 13:15:28,319 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42517 2023-07-15 13:15:28,319 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45689 2023-07-15 13:15:28,325 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:28,330 DEBUG [RS:1;jenkins-hbase4:34837] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,330 WARN [RS:1;jenkins-hbase4:34837] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:28,330 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,331 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,330 INFO [RS:1;jenkins-hbase4:34837] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:28,334 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,331 WARN [RS:2;jenkins-hbase4:38761] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 13:15:28,331 WARN [RS:0;jenkins-hbase4:37679] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:28,335 INFO [RS:2;jenkins-hbase4:38761] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:28,337 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34837,1689426926314] 2023-07-15 13:15:28,345 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38761,1689426926503] 2023-07-15 13:15:28,335 INFO [RS:0;jenkins-hbase4:37679] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:28,345 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37679,1689426926099] 2023-07-15 13:15:28,345 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,350 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,377 DEBUG [RS:1;jenkins-hbase4:34837] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,379 DEBUG [RS:1;jenkins-hbase4:34837] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,379 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,380 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,381 DEBUG [RS:1;jenkins-hbase4:34837] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,382 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,382 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,383 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,384 DEBUG [PEWorker-1] 
regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:28,385 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:15:28,395 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:28,399 DEBUG [RS:1;jenkins-hbase4:34837] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:28,399 DEBUG [RS:0;jenkins-hbase4:37679] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:28,399 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:28,400 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:28,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:28,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:15:28,409 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:28,409 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:28,410 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:28,410 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:28,413 INFO [RS:2;jenkins-hbase4:38761] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:28,413 INFO [RS:1;jenkins-hbase4:34837] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:28,429 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:28,430 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:28,432 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:28,433 INFO [RS:0;jenkins-hbase4:37679] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:28,437 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:28,438 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:28,453 INFO [RS:0;jenkins-hbase4:37679] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:28,453 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
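
The CompactionConfiguration lines above print the effective compaction tuning for each column family of hbase:meta: min/max files 3/10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560 bytes, major period 604800000 ms with 0.5 jitter. A hedged sketch of the standard 2.x configuration keys that feed those numbers, set programmatically on a Configuration; the key names are listed as an aid and are not taken from the test's own setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);               // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);         // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f); // off-peak ratio
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);        // minCompactSize
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);  // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);    // major period, 7 days in ms
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);  // major jitter
        // A Configuration built this way would reproduce the numbers printed above.
      }
    }
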
2023-07-15 13:15:28,468 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:28,471 INFO [RS:1;jenkins-hbase4:34837] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:28,477 INFO [RS:2;jenkins-hbase4:38761] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:28,480 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:28,481 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9923917120, jitterRate=-0.07576319575309753}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:28,481 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:28,481 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:28,481 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:28,482 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:28,482 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:28,482 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:28,483 INFO [RS:1;jenkins-hbase4:34837] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:28,483 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,486 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:28,487 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:28,487 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:28,483 INFO [RS:2;jenkins-hbase4:38761] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:28,483 INFO [RS:0;jenkins-hbase4:37679] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:28,491 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,491 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
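
The MemStoreFlusher and PressureAwareCompactionThroughputController lines above report a global memstore limit of 782.4 M with a low-water mark of 743.3 M (95% of the limit) and compaction throughput bounds of 50 to 100 MB/s. A small sketch of the arithmetic behind those figures, using what I believe are the standard 2.x keys (treat the key names and defaults as assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreAndThroughputLimits {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // globalMemStoreLimit = max heap * hbase.regionserver.global.memstore.size (default 0.4);
        // the low mark is that limit * hbase.regionserver.global.memstore.size.lower.limit
        // (default 0.95), which matches the 743.3 M / 782.4 M ratio above.
        double fraction = conf.getDouble("hbase.regionserver.global.memstore.size", 0.4);
        double lower = conf.getDouble("hbase.regionserver.global.memstore.size.lower.limit", 0.95);
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.printf("globalMemStoreLimit=%.1f M, lowMark=%.1f M%n",
            maxHeap * fraction / (1024.0 * 1024.0),
            maxHeap * fraction * lower / (1024.0 * 1024.0));
        // Compaction throughput bounds logged above (lower 50 MB/s, higher 100 MB/s).
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
      }
    }
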
2023-07-15 13:15:28,491 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:28,491 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:28,501 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:28,501 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 13:15:28,501 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,501 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,502 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,501 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,502 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,502 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:28,503 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): 
Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:28,504 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,503 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:2;jenkins-hbase4:38761] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,504 DEBUG [RS:0;jenkins-hbase4:37679] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,505 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,506 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,506 DEBUG [RS:1;jenkins-hbase4:34837] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:28,515 
INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 13:15:28,517 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,517 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,517 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,528 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,528 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,528 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,530 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,530 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,530 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,539 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 13:15:28,544 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 13:15:28,553 INFO [RS:0;jenkins-hbase4:37679] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:28,557 INFO [RS:1;jenkins-hbase4:34837] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:28,558 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37679,1689426926099-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,558 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34837,1689426926314-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:28,559 INFO [RS:2;jenkins-hbase4:38761] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:28,559 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38761,1689426926503-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
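
Each "Chore ScheduledChore name=... is enabled" line above records a periodic task handed to the server's ChoreService: compaction and memstore flush checks every second, nonce cleaning every six minutes, heap tuning every minute. As an illustration only, since ScheduledChore and ChoreService are internal HBase classes and the signatures below follow the 2.4-era code as an assumption, a chore boils down to:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreExample {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        // Fires chore() once per second until the stopper is stopped, mirroring the
        // CompactionChecker/MemstoreFlusherChore periods logged above.
        ScheduledChore checker = new ScheduledChore("demo-checker", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("periodic check");
          }
        };
        ChoreService service = new ChoreService("demo");
        service.scheduleChore(checker);
        Thread.sleep(3000);
        service.shutdown();
      }
    }
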
2023-07-15 13:15:28,578 INFO [RS:0;jenkins-hbase4:37679] regionserver.Replication(203): jenkins-hbase4.apache.org,37679,1689426926099 started 2023-07-15 13:15:28,578 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37679,1689426926099, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37679, sessionid=0x101691f914d0001 2023-07-15 13:15:28,578 INFO [RS:2;jenkins-hbase4:38761] regionserver.Replication(203): jenkins-hbase4.apache.org,38761,1689426926503 started 2023-07-15 13:15:28,578 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38761,1689426926503, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38761, sessionid=0x101691f914d0003 2023-07-15 13:15:28,578 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:28,578 DEBUG [RS:0;jenkins-hbase4:37679] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,579 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37679,1689426926099' 2023-07-15 13:15:28,579 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:28,579 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:28,580 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:28,580 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:28,580 DEBUG [RS:0;jenkins-hbase4:37679] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:28,580 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37679,1689426926099' 2023-07-15 13:15:28,580 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:28,581 DEBUG [RS:0;jenkins-hbase4:37679] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:28,581 DEBUG [RS:0;jenkins-hbase4:37679] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:28,581 INFO [RS:0;jenkins-hbase4:37679] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:15:28,581 INFO [RS:0;jenkins-hbase4:37679] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
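
RS:0 is now fully up: it serves RPCs, has joined the flush-table-proc and online-snapshot ZK procedure pools, and reports "Quota support disabled" because the minicluster does not set hbase.quota.enabled. For reference, a hedged sketch using the standard 2.x client API (the table name is a placeholder; nothing here comes from the test itself) of turning quotas on and throttling a table:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class EnableQuotas {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true);  // must be set cluster-wide before startup
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Limit a hypothetical table "t1" to 100 requests per second.
          admin.setQuota(QuotaSettingsFactory.throttleTable(
              TableName.valueOf("t1"), ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
        }
      }
    }
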
2023-07-15 13:15:28,582 INFO [RS:1;jenkins-hbase4:34837] regionserver.Replication(203): jenkins-hbase4.apache.org,34837,1689426926314 started 2023-07-15 13:15:28,582 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:28,583 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34837,1689426926314, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34837, sessionid=0x101691f914d0002 2023-07-15 13:15:28,583 DEBUG [RS:2;jenkins-hbase4:38761] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,583 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:28,584 DEBUG [RS:1;jenkins-hbase4:34837] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,584 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34837,1689426926314' 2023-07-15 13:15:28,584 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:28,583 DEBUG [RS:2;jenkins-hbase4:38761] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38761,1689426926503' 2023-07-15 13:15:28,584 DEBUG [RS:2;jenkins-hbase4:38761] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:28,584 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:28,584 DEBUG [RS:2;jenkins-hbase4:38761] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:28,585 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:28,585 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:28,585 DEBUG [RS:1;jenkins-hbase4:34837] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:28,585 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:28,585 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34837,1689426926314' 2023-07-15 13:15:28,586 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:28,585 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:28,586 DEBUG [RS:2;jenkins-hbase4:38761] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:28,586 DEBUG [RS:2;jenkins-hbase4:38761] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38761,1689426926503' 2023-07-15 13:15:28,586 DEBUG [RS:2;jenkins-hbase4:38761] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:28,586 DEBUG [RS:1;jenkins-hbase4:34837] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:28,586 DEBUG [RS:2;jenkins-hbase4:38761] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:28,587 DEBUG [RS:2;jenkins-hbase4:38761] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:28,587 INFO [RS:2;jenkins-hbase4:38761] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:15:28,587 DEBUG [RS:1;jenkins-hbase4:34837] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:28,587 INFO [RS:2;jenkins-hbase4:38761] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 13:15:28,588 INFO [RS:1;jenkins-hbase4:34837] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:15:28,588 INFO [RS:1;jenkins-hbase4:34837] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 13:15:28,696 DEBUG [jenkins-hbase4:40693] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 13:15:28,696 INFO [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34837%2C1689426926314, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:28,700 INFO [RS:0;jenkins-hbase4:37679] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37679%2C1689426926099, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,37679,1689426926099, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:28,723 DEBUG [jenkins-hbase4:40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:28,724 DEBUG [jenkins-hbase4:40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:28,725 DEBUG [jenkins-hbase4:40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:28,725 DEBUG [jenkins-hbase4:40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:28,725 DEBUG [jenkins-hbase4:40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:28,726 INFO [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38761%2C1689426926503, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:28,730 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting 
hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34837,1689426926314, state=OPENING 2023-07-15 13:15:28,752 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:28,756 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 13:15:28,758 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:28,769 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:28,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:28,772 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:28,772 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:28,772 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:28,772 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:28,773 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:28,811 WARN [IPC Server handler 0 on default port 42517] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-15 13:15:28,812 WARN [IPC Server handler 0 on default port 42517] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-15 
13:15:28,812 WARN [IPC Server handler 0 on default port 42517] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-15 13:15:28,818 WARN [ReadOnlyZKClient-127.0.0.1:54157@0x1a068dda] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-15 13:15:28,863 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:28,867 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:28,868 INFO [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314/jenkins-hbase4.apache.org%2C34837%2C1689426926314.1689426928700 2023-07-15 13:15:28,868 INFO [RS:0;jenkins-hbase4:37679] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,37679,1689426926099/jenkins-hbase4.apache.org%2C37679%2C1689426926099.1689426928709 2023-07-15 13:15:28,870 DEBUG [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK], DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK]] 2023-07-15 13:15:28,876 DEBUG [RS:0;jenkins-hbase4:37679] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK], DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK]] 2023-07-15 13:15:28,876 INFO [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503/jenkins-hbase4.apache.org%2C38761%2C1689426926503.1689426928728 2023-07-15 13:15:28,879 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:28,879 DEBUG [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK], DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK]] 2023-07-15 13:15:28,888 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-15 13:15:28,889 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34837] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:41210 deadline: 1689426988888, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,051 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,055 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:29,059 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41216, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:29,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 13:15:29,073 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:29,076 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34837%2C1689426926314.meta, suffix=.meta, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:29,097 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:29,097 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:29,097 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:29,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314/jenkins-hbase4.apache.org%2C34837%2C1689426926314.meta.1689426929078.meta 2023-07-15 13:15:29,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK]] 2023-07-15 13:15:29,103 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:29,105 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:29,108 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 13:15:29,110 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-15 13:15:29,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 13:15:29,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:29,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 13:15:29,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 13:15:29,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:15:29,122 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:29,122 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:29,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:29,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:29,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:15:29,124 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:29,124 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:29,125 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:29,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:29,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:29,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:29,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:29,128 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:29,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:29,130 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:29,133 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) 
under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:29,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 13:15:29,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:29,140 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11621816480, jitterRate=0.08236600458621979}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:29,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:29,151 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689426929047 2023-07-15 13:15:29,175 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 13:15:29,177 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 13:15:29,177 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34837,1689426926314, state=OPEN 2023-07-15 13:15:29,180 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:29,180 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:29,187 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 13:15:29,187 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34837,1689426926314 in 410 msec 2023-07-15 13:15:29,193 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 13:15:29,193 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 674 msec 2023-07-15 13:15:29,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2540 sec 2023-07-15 13:15:29,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689426929201, completionTime=-1 2023-07-15 13:15:29,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 13:15:29,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): 
Joining cluster... 2023-07-15 13:15:29,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 13:15:29,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689426989275 2023-07-15 13:15:29,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689427049275 2023-07-15 13:15:29,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-07-15 13:15:29,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40693,1689426924021-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:29,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40693,1689426924021-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:29,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40693,1689426924021-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:29,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40693, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:29,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:29,300 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 13:15:29,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
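
With hbase:meta assigned, the master has joined the cluster, scheduled its housekeeping chores (BalancerChore and RegionNormalizerChore every 5 minutes, CatalogJanitor, HbckChore), and found that the namespace table still needs creating. The work those chores do can also be driven by hand through the Admin API; a brief sketch, assuming the 2.x client interface rather than anything in this test:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MasterHousekeeping {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.balancerSwitch(false, true);  // pause the BalancerChore's work, synchronously
          admin.balance();                    // run one balancing pass on demand
          admin.runCatalogJanitor();          // trigger what CatalogJanitor does periodically
          admin.normalizerSwitch(true);       // allow RegionNormalizerChore to act
          admin.balancerSwitch(true, true);   // re-enable the balancer
        }
      }
    }
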
2023-07-15 13:15:29,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:29,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 13:15:29,324 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:29,326 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:29,342 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,345 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b empty. 2023-07-15 13:15:29,345 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,346 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 13:15:29,378 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:29,380 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ee65cdb74bf12e0dc6b097a112f439b, NAME => 'hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0ee65cdb74bf12e0dc6b097a112f439b, disabling compactions & flushes 2023-07-15 13:15:29,399 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 
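
The create above shows the descriptor the master uses for hbase:namespace: a single 'info' family with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10 and BLOCKSIZE=8192. A user table with the same shape could be declared through the 2.x client builders roughly as follows; the table name and connection handling are placeholders, not part of this run:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateNamespaceLikeTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo"))  // hypothetical table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setBloomFilterType(BloomType.ROW)
                  .setInMemory(true)
                  .setMaxVersions(10)
                  .setBlocksize(8192)
                  .build())
              .build());
        }
      }
    }
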
2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. after waiting 0 ms 2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:29,399 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:29,399 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0ee65cdb74bf12e0dc6b097a112f439b: 2023-07-15 13:15:29,403 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:29,411 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:29,413 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 13:15:29,416 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:29,418 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:29,421 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,422 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc empty. 
2023-07-15 13:15:29,423 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,423 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 13:15:29,425 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426929406"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426929406"}]},"ts":"1689426929406"} 2023-07-15 13:15:29,449 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:29,451 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e42fabbf20609a275edbe64c71867bfc, NAME => 'hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:29,473 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:29,477 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:29,479 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:29,480 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e42fabbf20609a275edbe64c71867bfc, disabling compactions & flushes 2023-07-15 13:15:29,480 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,480 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,480 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. after waiting 0 ms 2023-07-15 13:15:29,480 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,480 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 
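
The 'hbase:rsgroup' create above carries table attributes that load the MultiRowMutationEndpoint coprocessor and pin DisabledRegionSplitPolicy so the region never splits. A hedged sketch of declaring the same attributes on a user table via TableDescriptorBuilder; the table name is a placeholder, and the descriptor is only built and printed so the snippet runs without a cluster.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptorSketch {
      public static void main(String[] args) throws Exception {
        // Mirrors the attributes printed for 'hbase:rsgroup' above: a single family 'm',
        // the MultiRowMutationEndpoint coprocessor, and a never-split policy.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_group_like"))   // placeholder name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
        System.out.println(td);   // prints the descriptor, roughly in the form shown above
      }
    }
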
2023-07-15 13:15:29,480 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:29,483 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426929477"}]},"ts":"1689426929477"} 2023-07-15 13:15:29,484 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:29,486 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426929486"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426929486"}]},"ts":"1689426929486"} 2023-07-15 13:15:29,488 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 13:15:29,490 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:29,492 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:29,492 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426929492"}]},"ts":"1689426929492"} 2023-07-15 13:15:29,494 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:29,494 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:29,494 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:29,494 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:29,495 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:29,496 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 13:15:29,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, ASSIGN}] 2023-07-15 13:15:29,499 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:29,500 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, ASSIGN 2023-07-15 13:15:29,500 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:29,500 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:29,500 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:29,500 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of 
racks=1 2023-07-15 13:15:29,500 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, ASSIGN}] 2023-07-15 13:15:29,501 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:29,503 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, ASSIGN 2023-07-15 13:15:29,505 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:29,505 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-15 13:15:29,507 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,507 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,507 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426929507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426929507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426929507"}]},"ts":"1689426929507"} 2023-07-15 13:15:29,507 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426929507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426929507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426929507"}]},"ts":"1689426929507"} 2023-07-15 13:15:29,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:29,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:29,672 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 
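
The TransitRegionStateProcedure / OpenRegionProcedure pids above are the master assigning the two new regions to jenkins-hbase4.apache.org,34837. A comparable assignment can be requested from a client with Admin.move; the sketch below assumes a running cluster and an existing table named demo_example (both placeholders for this log).

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Pick the first region of the (hypothetical) table and any live server.
          List<RegionInfo> regions = admin.getRegions(TableName.valueOf("demo_example"));
          ServerName dest =
              admin.getClusterMetrics().getLiveServerMetrics().keySet().iterator().next();
          // Asks the master to run an assignment procedure, much like pid=6/pid=7 above.
          // (If the region already lives on 'dest', the move is effectively a no-op.)
          admin.move(Bytes.toBytes(regions.get(0).getEncodedName()), dest);
        }
      }
    }
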
2023-07-15 13:15:29,672 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ee65cdb74bf12e0dc6b097a112f439b, NAME => 'hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:29,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:29,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,675 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,679 DEBUG [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info 2023-07-15 13:15:29,680 DEBUG [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info 2023-07-15 13:15:29,680 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ee65cdb74bf12e0dc6b097a112f439b columnFamilyName info 2023-07-15 13:15:29,681 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] regionserver.HStore(310): Store=0ee65cdb74bf12e0dc6b097a112f439b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:29,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:29,690 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:29,691 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ee65cdb74bf12e0dc6b097a112f439b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9422828160, jitterRate=-0.12243074178695679}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:29,691 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ee65cdb74bf12e0dc6b097a112f439b: 2023-07-15 13:15:29,693 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b., pid=9, masterSystemTime=1689426929664 2023-07-15 13:15:29,696 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:29,696 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:29,696 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,696 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e42fabbf20609a275edbe64c71867bfc, NAME => 'hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:29,697 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:29,697 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. service=MultiRowMutationService 2023-07-15 13:15:29,698 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-15 13:15:29,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:29,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,698 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,699 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426929698"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426929698"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426929698"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426929698"}]},"ts":"1689426929698"} 2023-07-15 13:15:29,700 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,702 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:29,703 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:29,703 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e42fabbf20609a275edbe64c71867bfc columnFamilyName m 2023-07-15 13:15:29,704 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(310): Store=e42fabbf20609a275edbe64c71867bfc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:29,705 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-15 13:15:29,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,34837,1689426926314 in 189 msec 2023-07-15 13:15:29,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-15 13:15:29,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, ASSIGN in 211 msec 2023-07-15 13:15:29,713 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:29,713 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:29,714 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426929714"}]},"ts":"1689426929714"} 2023-07-15 13:15:29,716 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:29,716 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 13:15:29,717 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e42fabbf20609a275edbe64c71867bfc; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7a9a646c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:29,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:29,718 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc., pid=8, masterSystemTime=1689426929664 2023-07-15 13:15:29,720 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:29,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,722 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:29,723 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:29,724 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426929723"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426929723"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426929723"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426929723"}]},"ts":"1689426929723"} 2023-07-15 13:15:29,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 409 msec 2023-07-15 13:15:29,725 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 13:15:29,727 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:29,727 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:29,731 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-15 13:15:29,731 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,34837,1689426926314 in 216 msec 2023-07-15 13:15:29,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-15 13:15:29,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, ASSIGN in 231 msec 2023-07-15 13:15:29,736 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:29,737 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426929737"}]},"ts":"1689426929737"} 2023-07-15 13:15:29,739 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 13:15:29,743 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:29,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, 
state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 332 msec 2023-07-15 13:15:29,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 13:15:29,779 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:29,785 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 26 msec 2023-07-15 13:15:29,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 13:15:29,800 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:29,805 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-15 13:15:29,814 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 13:15:29,818 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 13:15:29,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.097sec 2023-07-15 13:15:29,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 13:15:29,819 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-15 13:15:29,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-15 13:15:29,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-15 13:15:29,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 13:15:29,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40693,1689426924021-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-15 13:15:29,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40693,1689426924021-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
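
The CreateNamespaceProcedure entries for 'default' and 'hbase' above register the two built-in namespaces during first master startup. Creating a user namespace goes through the same procedure; a minimal sketch with a made-up namespace name, assuming a reachable cluster.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Runs a CreateNamespaceProcedure on the master, like pid=10/pid=11 above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());  // placeholder name
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());   // expect: default, hbase, demo_ns
          }
        }
      }
    }
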
2023-07-15 13:15:29,836 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 13:15:29,890 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:29,890 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:29,893 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:15:29,900 DEBUG [Listener at localhost/38739] zookeeper.ReadOnlyZKClient(139): Connect 0x29b587e1 to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:29,900 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 13:15:29,906 DEBUG [Listener at localhost/38739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65b017d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:29,923 DEBUG [hconnection-0x4925918a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:29,938 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:29,949 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:29,950 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:29,959 DEBUG [Listener at localhost/38739] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 13:15:29,963 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 13:15:29,976 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 13:15:29,976 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:29,977 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 13:15:29,983 DEBUG [Listener at localhost/38739] zookeeper.ReadOnlyZKClient(139): Connect 0x41ccb94f to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:29,990 DEBUG [Listener at localhost/38739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1792a4f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:29,990 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:29,993 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:29,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101691f914d000a connected 2023-07-15 13:15:30,023 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=421, OpenFileDescriptor=663, MaxFileDescriptor=60000, SystemLoadAverage=361, ProcessCount=172, AvailableMemoryMB=3363 2023-07-15 13:15:30,025 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-15 13:15:30,049 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:30,050 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:30,090 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:30,104 INFO [Listener at localhost/38739] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:30,106 INFO [Listener at localhost/38739] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44807 2023-07-15 13:15:30,107 INFO [Listener at localhost/38739] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:30,108 DEBUG [Listener at localhost/38739] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:30,109 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:30,112 INFO [Listener at localhost/38739] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:30,114 INFO [Listener at localhost/38739] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44807 connecting to ZooKeeper ensemble=127.0.0.1:54157 2023-07-15 13:15:30,118 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:448070x0, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:30,119 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(162): regionserver:448070x0, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:15:30,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44807-0x101691f914d000b connected 2023-07-15 13:15:30,120 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-15 13:15:30,121 DEBUG [Listener at localhost/38739] zookeeper.ZKUtil(164): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:30,123 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44807 2023-07-15 13:15:30,123 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44807 2023-07-15 13:15:30,124 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44807 2023-07-15 13:15:30,125 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44807 2023-07-15 13:15:30,125 DEBUG [Listener at localhost/38739] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44807 2023-07-15 13:15:30,127 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:30,128 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:30,128 INFO [Listener at localhost/38739] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:30,128 INFO [Listener at localhost/38739] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:30,128 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:30,128 INFO [Listener at localhost/38739] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:30,129 INFO [Listener at localhost/38739] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:15:30,129 INFO [Listener at localhost/38739] http.HttpServer(1146): Jetty bound to port 35475 2023-07-15 13:15:30,129 INFO [Listener at localhost/38739] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:30,131 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:30,132 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38634af7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:30,132 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:30,133 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@36e2e513{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:30,264 INFO [Listener at localhost/38739] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:30,265 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:30,265 INFO [Listener at localhost/38739] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:30,265 INFO [Listener at localhost/38739] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:15:30,267 INFO [Listener at localhost/38739] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:30,268 INFO [Listener at localhost/38739] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@33753ee9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/java.io.tmpdir/jetty-0_0_0_0-35475-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2955461998918264680/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:30,270 INFO [Listener at localhost/38739] server.AbstractConnector(333): Started ServerConnector@2652937{HTTP/1.1, (http/1.1)}{0.0.0.0:35475} 2023-07-15 13:15:30,270 INFO [Listener at localhost/38739] server.Server(415): Started @12097ms 2023-07-15 13:15:30,273 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(951): ClusterId : 925ed21b-d5d3-43cb-ada8-6df6a6ac8d5d 2023-07-15 13:15:30,273 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 
13:15:30,276 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:30,276 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:30,280 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:30,281 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ReadOnlyZKClient(139): Connect 0x3bf03c53 to 127.0.0.1:54157 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:30,286 DEBUG [RS:3;jenkins-hbase4:44807] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3537c55d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:30,286 DEBUG [RS:3;jenkins-hbase4:44807] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31085723, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:30,296 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44807 2023-07-15 13:15:30,296 INFO [RS:3;jenkins-hbase4:44807] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:30,296 INFO [RS:3;jenkins-hbase4:44807] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:30,296 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:30,297 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40693,1689426924021 with isa=jenkins-hbase4.apache.org/172.31.14.131:44807, startcode=1689426930103 2023-07-15 13:15:30,297 DEBUG [RS:3;jenkins-hbase4:44807] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:30,302 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51681, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:30,303 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,303 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
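
The ListRSGroupInfos request and the ServerEventsListenerThread "Updating default servers" entries above come from the rsgroup coprocessor endpoint tracking group membership. In the branch-2.4 hbase-rsgroup module the same listing is available through RSGroupAdminClient; a hedged sketch, assuming that module is on the classpath (the class is not public API and was reworked in later branches).

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRSGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues the same RSGroupAdminService.ListRSGroupInfos call seen in the log.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }
        }
      }
    }
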
2023-07-15 13:15:30,304 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe 2023-07-15 13:15:30,304 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42517 2023-07-15 13:15:30,304 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45689 2023-07-15 13:15:30,314 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:30,314 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:30,314 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:30,314 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:30,315 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44807,1689426930103] 2023-07-15 13:15:30,315 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,316 WARN [RS:3;jenkins-hbase4:44807] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
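
The RS:3 startup above (NettyRpcServer bind, Jetty info server, ZooKeeper registration, reportForDuty) is the test adding a fourth region server to the minicluster after "Restoring servers: 1". A minimal sketch of the same operation with HBaseTestingUtility, assuming the hbase-server test artifacts are available; this spins up its own throwaway cluster rather than the one in this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class ExtraRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                       // 1 master + 3 region servers
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // Adds one more region server to the running minicluster; it registers with the
        // master via reportForDuty, as RS:3 does in the log above.
        cluster.startRegionServer();
        util.shutdownMiniCluster();
      }
    }
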
2023-07-15 13:15:30,316 INFO [RS:3;jenkins-hbase4:44807] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:30,316 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,317 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:30,317 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,319 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:15:30,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:30,323 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:30,323 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:30,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:30,325 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40693,1689426924021] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated 
with servers: 4 2023-07-15 13:15:30,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:30,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:30,336 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,336 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,337 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:30,337 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ZKUtil(162): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:30,339 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:30,339 INFO [RS:3;jenkins-hbase4:44807] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:30,341 INFO [RS:3;jenkins-hbase4:44807] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:30,343 INFO [RS:3;jenkins-hbase4:44807] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:30,343 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:30,343 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:30,345 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:30,345 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,346 DEBUG [RS:3;jenkins-hbase4:44807] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:30,350 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:30,351 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:30,351 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:30,363 INFO [RS:3;jenkins-hbase4:44807] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:30,363 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44807,1689426930103-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:30,375 INFO [RS:3;jenkins-hbase4:44807] regionserver.Replication(203): jenkins-hbase4.apache.org,44807,1689426930103 started 2023-07-15 13:15:30,375 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44807,1689426930103, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44807, sessionid=0x101691f914d000b 2023-07-15 13:15:30,375 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:30,375 DEBUG [RS:3;jenkins-hbase4:44807] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,375 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44807,1689426930103' 2023-07-15 13:15:30,375 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:30,376 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:30,376 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:30,376 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:30,377 DEBUG [RS:3;jenkins-hbase4:44807] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:30,377 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44807,1689426930103' 2023-07-15 13:15:30,377 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:30,377 DEBUG [RS:3;jenkins-hbase4:44807] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:30,378 DEBUG [RS:3;jenkins-hbase4:44807] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:30,378 INFO [RS:3;jenkins-hbase4:44807] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:15:30,378 INFO [RS:3;jenkins-hbase4:44807] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 13:15:30,381 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:30,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:30,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:30,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:30,391 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:30,394 DEBUG [hconnection-0x22934466-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:30,398 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:30,420 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:30,421 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:30,431 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:30,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:30,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36536 deadline: 1689428130430, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
2023-07-15 13:15:30,433 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:30,437 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:30,439 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:30,439 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:30,439 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:30,446 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:30,446 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:30,448 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:30,449 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:30,450 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:30,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:30,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:30,462 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:30,467 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:30,467 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:30,473 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:30,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:30,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:30,483 INFO [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44807%2C1689426930103, suffix=, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,44807,1689426930103, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:30,485 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(238): Moving server region e42fabbf20609a275edbe64c71867bfc, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:30,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:30,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:30,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:30,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:30,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE 2023-07-15 13:15:30,490 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE 2023-07-15 13:15:30,490 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(238): Moving server region 0ee65cdb74bf12e0dc6b097a112f439b, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:30,491 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:30,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:30,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:30,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:30,492 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,492 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426930491"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426930491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426930491"}]},"ts":"1689426930491"} 2023-07-15 13:15:30,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:30,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, REOPEN/MOVE 2023-07-15 13:15:30,501 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:30,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:30,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:30,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:30,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:30,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:30,507 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, REOPEN/MOVE 2023-07-15 13:15:30,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 13:15:30,511 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,511 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-15 13:15:30,512 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426930511"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426930511"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426930511"}]},"ts":"1689426930511"} 2023-07-15 13:15:30,513 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 13:15:30,521 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:30,522 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:30,523 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34837,1689426926314, state=CLOSING 2023-07-15 13:15:30,524 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:30,526 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:30,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:30,526 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:30,531 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,531 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:30,535 INFO [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,44807,1689426930103/jenkins-hbase4.apache.org%2C44807%2C1689426930103.1689426930484 2023-07-15 13:15:30,536 DEBUG [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK], DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK]] 2023-07-15 13:15:30,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-15 13:15:30,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:30,677 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:30,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:30,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:30,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:30,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:30,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e42fabbf20609a275edbe64c71867bfc, disabling compactions & flushes 2023-07-15 13:15:30,678 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:30,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:30,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. after waiting 0 ms 2023-07-15 13:15:30,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 
2023-07-15 13:15:30,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e42fabbf20609a275edbe64c71867bfc 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-15 13:15:30,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-15 13:15:30,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/f63cee1a17324fa69b06e1ff700fd92a 2023-07-15 13:15:30,782 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/0a8b942f47c44e2c834ccca876fd0ae3 2023-07-15 13:15:30,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/f63cee1a17324fa69b06e1ff700fd92a as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/f63cee1a17324fa69b06e1ff700fd92a 2023-07-15 13:15:30,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/f63cee1a17324fa69b06e1ff700fd92a, entries=3, sequenceid=9, filesize=5.2 K 2023-07-15 13:15:30,861 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/2a54995606bd4505a4407e782cd33ad0 2023-07-15 13:15:30,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for e42fabbf20609a275edbe64c71867bfc in 187ms, sequenceid=9, compaction requested=false 2023-07-15 13:15:30,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-15 13:15:30,874 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/0a8b942f47c44e2c834ccca876fd0ae3 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/0a8b942f47c44e2c834ccca876fd0ae3 2023-07-15 13:15:30,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-15 13:15:30,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop 
coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:30,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:30,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:30,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e42fabbf20609a275edbe64c71867bfc move to jenkins-hbase4.apache.org,38761,1689426926503 record at close sequenceid=9 2023-07-15 13:15:30,888 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/0a8b942f47c44e2c834ccca876fd0ae3, entries=22, sequenceid=16, filesize=7.3 K 2023-07-15 13:15:30,890 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:30,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:30,890 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/2a54995606bd4505a4407e782cd33ad0 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/2a54995606bd4505a4407e782cd33ad0 2023-07-15 13:15:30,898 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/2a54995606bd4505a4407e782cd33ad0, entries=4, sequenceid=16, filesize=4.8 K 2023-07-15 13:15:30,900 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 221ms, sequenceid=16, compaction requested=false 2023-07-15 13:15:30,900 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 13:15:30,910 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-15 13:15:30,910 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:30,910 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:30,910 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:30,911 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,38761,1689426926503 record at close sequenceid=16 2023-07-15 13:15:30,913 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-15 13:15:30,913 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-15 13:15:30,917 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-15 13:15:30,917 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34837,1689426926314 in 388 msec 2023-07-15 13:15:30,918 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:31,068 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:15:31,068 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38761,1689426926503, state=OPENING 2023-07-15 13:15:31,070 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:31,070 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:31,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:31,224 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:31,225 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:31,228 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:31,234 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 13:15:31,234 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:31,236 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38761%2C1689426926503.meta, suffix=.meta, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:31,254 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:31,256 
DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:31,258 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:31,260 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503/jenkins-hbase4.apache.org%2C38761%2C1689426926503.meta.1689426931237.meta 2023-07-15 13:15:31,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK], DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK]] 2023-07-15 13:15:31,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 13:15:31,261 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 13:15:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 13:15:31,265 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:15:31,266 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:31,266 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:31,267 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:31,277 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/0a8b942f47c44e2c834ccca876fd0ae3 2023-07-15 13:15:31,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:31,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:15:31,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:31,280 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 
13:15:31,280 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:31,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:31,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:31,282 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:31,282 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:31,282 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:31,293 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/2a54995606bd4505a4407e782cd33ad0 2023-07-15 13:15:31,293 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:31,294 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:31,296 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:31,300 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 13:15:31,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:31,303 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11203326720, jitterRate=0.04339110851287842}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:31,303 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:31,305 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689426931224 2023-07-15 13:15:31,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 13:15:31,309 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38761,1689426926503, state=OPEN 2023-07-15 13:15:31,311 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 13:15:31,311 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:31,311 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:31,314 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=CLOSED 2023-07-15 13:15:31,314 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426931314"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426931314"}]},"ts":"1689426931314"} 2023-07-15 13:15:31,316 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34837] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:41210 deadline: 1689426991315, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=38761 startCode=1689426926503. As of locationSeqNum=16. 
2023-07-15 13:15:31,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-15 13:15:31,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38761,1689426926503 in 241 msec 2023-07-15 13:15:31,319 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 815 msec 2023-07-15 13:15:31,417 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:31,421 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:31,431 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-15 13:15:31,431 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,34837,1689426926314 in 929 msec 2023-07-15 13:15:31,432 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:31,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ee65cdb74bf12e0dc6b097a112f439b, disabling compactions & flushes 2023-07-15 13:15:31,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:31,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:31,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. after waiting 0 ms 2023-07-15 13:15:31,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 
2023-07-15 13:15:31,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0ee65cdb74bf12e0dc6b097a112f439b 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-15 13:15:31,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/.tmp/info/96cfc1e7ed5c45faab2274ed9d6af4a9 2023-07-15 13:15:31,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/.tmp/info/96cfc1e7ed5c45faab2274ed9d6af4a9 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info/96cfc1e7ed5c45faab2274ed9d6af4a9 2023-07-15 13:15:31,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-15 13:15:31,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info/96cfc1e7ed5c45faab2274ed9d6af4a9, entries=2, sequenceid=6, filesize=4.8 K 2023-07-15 13:15:31,528 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 0ee65cdb74bf12e0dc6b097a112f439b in 61ms, sequenceid=6, compaction requested=false 2023-07-15 13:15:31,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-15 13:15:31,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-15 13:15:31,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 
2023-07-15 13:15:31,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ee65cdb74bf12e0dc6b097a112f439b: 2023-07-15 13:15:31,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0ee65cdb74bf12e0dc6b097a112f439b move to jenkins-hbase4.apache.org,44807,1689426930103 record at close sequenceid=6 2023-07-15 13:15:31,553 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=CLOSED 2023-07-15 13:15:31,554 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426931553"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426931553"}]},"ts":"1689426931553"} 2023-07-15 13:15:31,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-15 13:15:31,566 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,34837,1689426926314 in 1.0370 sec 2023-07-15 13:15:31,567 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:31,568 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-15 13:15:31,568 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:31,569 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426931568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426931568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426931568"}]},"ts":"1689426931568"} 2023-07-15 13:15:31,569 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:31,570 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426931569"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426931569"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426931569"}]},"ts":"1689426931569"} 2023-07-15 13:15:31,572 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:31,573 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:31,728 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:31,728 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:31,732 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33920, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:31,736 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:31,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e42fabbf20609a275edbe64c71867bfc, NAME => 'hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:31,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. service=MultiRowMutationService 2023-07-15 13:15:31,737 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,737 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ee65cdb74bf12e0dc6b097a112f439b, NAME => 'hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:31,737 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:31,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,738 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,739 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,743 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,743 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:31,743 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:31,744 DEBUG 
[StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info 2023-07-15 13:15:31,744 DEBUG [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info 2023-07-15 13:15:31,744 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e42fabbf20609a275edbe64c71867bfc columnFamilyName m 2023-07-15 13:15:31,744 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ee65cdb74bf12e0dc6b097a112f439b columnFamilyName info 2023-07-15 13:15:31,755 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/f63cee1a17324fa69b06e1ff700fd92a 2023-07-15 13:15:31,760 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(310): Store=e42fabbf20609a275edbe64c71867bfc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:31,762 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,762 DEBUG [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/info/96cfc1e7ed5c45faab2274ed9d6af4a9 2023-07-15 13:15:31,763 INFO [StoreOpener-0ee65cdb74bf12e0dc6b097a112f439b-1] regionserver.HStore(310): Store=0ee65cdb74bf12e0dc6b097a112f439b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 
13:15:31,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,765 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,770 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:31,772 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e42fabbf20609a275edbe64c71867bfc; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7556a759, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:31,772 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:31,772 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:31,776 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc., pid=19, masterSystemTime=1689426931727 2023-07-15 13:15:31,777 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ee65cdb74bf12e0dc6b097a112f439b; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11399811680, jitterRate=0.06169019639492035}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:31,777 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ee65cdb74bf12e0dc6b097a112f439b: 2023-07-15 13:15:31,778 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b., pid=20, masterSystemTime=1689426931728 2023-07-15 13:15:31,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:31,784 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426931783"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426931783"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426931783"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426931783"}]},"ts":"1689426931783"} 2023-07-15 13:15:31,785 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:31,785 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:31,785 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:31,787 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:31,788 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=0ee65cdb74bf12e0dc6b097a112f439b, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:31,788 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426931787"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426931787"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426931787"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426931787"}]},"ts":"1689426931787"} 2023-07-15 13:15:31,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-15 13:15:31,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,38761,1689426926503 in 217 msec 2023-07-15 13:15:31,811 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-15 13:15:31,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure 0ee65cdb74bf12e0dc6b097a112f439b, server=jenkins-hbase4.apache.org,44807,1689426930103 in 218 msec 2023-07-15 13:15:31,816 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE in 1.3210 sec 2023-07-15 13:15:31,817 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0ee65cdb74bf12e0dc6b097a112f439b, REOPEN/MOVE in 1.3220 sec 2023-07-15 13:15:32,520 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to default 2023-07-15 13:15:32,520 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:32,520 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:32,523 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34837] ipc.CallRunner(144): 
callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:57670 deadline: 1689426992522, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=38761 startCode=1689426926503. As of locationSeqNum=9. 2023-07-15 13:15:32,626 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34837] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:57670 deadline: 1689426992626, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=38761 startCode=1689426926503. As of locationSeqNum=16. 2023-07-15 13:15:32,729 DEBUG [hconnection-0x22934466-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:32,732 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44982, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:32,769 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:32,769 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:32,773 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:32,774 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:32,785 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:32,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:32,791 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:32,794 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34837] ipc.CallRunner(144): callId: 50 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:41210 deadline: 1689426992794, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=38761 startCode=1689426926503. As of locationSeqNum=9. 
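The entries above are the master-side trace of rsgroup administration finishing up: the MoveServers request completes ("Move servers done: default => Group_testTableMoveTruncateAndDrop_348588243"), after which the client issues ListRSGroupInfos and GetRSGroupInfo before asking for the table create. For orientation, here is a minimal client-side sketch of calls that would produce this kind of trace. It is illustrative only, not the test's actual code: it assumes the branch-2.4 hbase-rsgroup module (RSGroupAdminClient) is on the classpath, and the class name, hostname and port below are placeholders rather than values taken from this log.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupMoveExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create the target group and move one region server into it.
      // Host/port are placeholders, not taken from the log above.
      String group = "Group_testTableMoveTruncateAndDrop_348588243";
      rsGroupAdmin.addRSGroup(group);
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("regionserver-host.example.com", 16020)),
          group);

      // Mirrors the ListRSGroupInfos / GetRSGroupInfo requests seen in the log.
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " -> " + info.getServers());
      }
      RSGroupInfo target = rsGroupAdmin.getRSGroupInfo(group);
      System.out.println("servers in " + group + ": " + target.getServers());
    }
  }
}
```

Moving a server out of the default group is what triggers the REOPEN/MOVE procedures earlier in the log: any regions hosted on the moved servers (here hbase:namespace and hbase:rsgroup) must be closed and reopened on servers that remain in their group.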
2023-07-15 13:15:32,797 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-15 13:15:32,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:15:32,901 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:32,902 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:32,903 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:32,903 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:32,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:15:32,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:32,918 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:32,919 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:32,919 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:32,919 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:32,920 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 empty. 2023-07-15 13:15:32,920 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:32,920 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f empty. 2023-07-15 13:15:32,920 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 empty. 
2023-07-15 13:15:32,921 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 empty. 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 empty. 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:32,924 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:32,924 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 13:15:32,952 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:32,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 0f7d07568d421cbca320503a84958086, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:32,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8844727d6df4b8d9b823e3faca5489d9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY 
=> 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:32,954 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => aea00e8f08a5ae412c1d320c570c4c46, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:32,994 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:32,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8844727d6df4b8d9b823e3faca5489d9, disabling compactions & flushes 2023-07-15 13:15:32,995 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:32,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:32,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. after waiting 0 ms 2023-07-15 13:15:32,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:32,995 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:32,996 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8844727d6df4b8d9b823e3faca5489d9: 2023-07-15 13:15:32,997 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => e1569b942aa4f0e4d1bd934eb531919f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:32,997 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:32,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 0f7d07568d421cbca320503a84958086, disabling compactions & flushes 2023-07-15 13:15:32,998 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:32,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:32,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing aea00e8f08a5ae412c1d320c570c4c46, disabling compactions & flushes 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. after waiting 0 ms 2023-07-15 13:15:32,999 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 
2023-07-15 13:15:32,999 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. after waiting 0 ms 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 0f7d07568d421cbca320503a84958086: 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:32,999 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:32,999 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for aea00e8f08a5ae412c1d320c570c4c46: 2023-07-15 13:15:33,000 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b1cbc749cd0d75a714d2d346cd086208, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:33,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,026 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing e1569b942aa4f0e4d1bd934eb531919f, disabling compactions & flushes 2023-07-15 13:15:33,026 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,026 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,026 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 
after waiting 0 ms 2023-07-15 13:15:33,026 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,026 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b1cbc749cd0d75a714d2d346cd086208, disabling compactions & flushes 2023-07-15 13:15:33,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for e1569b942aa4f0e4d1bd934eb531919f: 2023-07-15 13:15:33,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. after waiting 0 ms 2023-07-15 13:15:33,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,028 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 
2023-07-15 13:15:33,028 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b1cbc749cd0d75a714d2d346cd086208: 2023-07-15 13:15:33,033 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:33,035 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426933034"}]},"ts":"1689426933034"} 2023-07-15 13:15:33,035 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426933034"}]},"ts":"1689426933034"} 2023-07-15 13:15:33,035 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426933034"}]},"ts":"1689426933034"} 2023-07-15 13:15:33,035 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426933034"}]},"ts":"1689426933034"} 2023-07-15 13:15:33,036 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426933034"}]},"ts":"1689426933034"} 2023-07-15 13:15:33,094 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
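At this point CreateTableProcedure has written the table descriptor, initialized the region directories under .tmp, and recorded the five regions in hbase:meta ("Added 5 regions to meta"). The region boundaries visible in the log ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '') are exactly what a pre-split create over the key range aaaaa..zzzzz with five regions yields. The sketch below shows one way a client could issue an equivalent request with the standard Admin API; it is an assumption-laden illustration (class name is invented, and the test itself may go through HBaseTestingUtility helpers instead).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateSplitTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'f' with defaults, matching the descriptor in the log.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Pre-split into 5 regions over the key range; the intermediate split
      // points HBase derives from this range are the ones that appear in the
      // log above ('i\xBF\x14i\xBE' and 'r\x1C\xC7r\x1B').
      admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
    }
  }
}
```

Once the regions are added to meta, the remaining log entries are the CREATE_TABLE_ASSIGN_REGIONS step: one TransitRegionStateProcedure per region (pids 22-26), each spawning an OpenRegionProcedure on a server in the table's rsgroup.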
2023-07-15 13:15:33,096 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:33,096 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426933096"}]},"ts":"1689426933096"} 2023-07-15 13:15:33,104 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-15 13:15:33,110 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:33,111 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:33,111 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:33,111 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:33,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, ASSIGN}] 2023-07-15 13:15:33,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:15:33,115 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, ASSIGN 2023-07-15 13:15:33,116 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, ASSIGN 2023-07-15 13:15:33,118 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, ASSIGN 2023-07-15 13:15:33,118 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:33,118 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, ASSIGN 2023-07-15 13:15:33,118 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:33,120 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, ASSIGN 2023-07-15 13:15:33,120 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:33,120 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:33,138 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:33,268 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-15 13:15:33,272 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,272 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:33,272 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:33,272 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,272 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,273 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426933272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426933272"}]},"ts":"1689426933272"} 2023-07-15 13:15:33,273 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426933272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426933272"}]},"ts":"1689426933272"} 2023-07-15 13:15:33,273 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426933272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426933272"}]},"ts":"1689426933272"} 2023-07-15 13:15:33,273 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426933272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426933272"}]},"ts":"1689426933272"} 2023-07-15 13:15:33,272 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426933272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426933272"}]},"ts":"1689426933272"} 2023-07-15 13:15:33,277 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; OpenRegionProcedure 
0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:33,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=26, state=RUNNABLE; OpenRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:33,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=22, state=RUNNABLE; OpenRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:33,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=23, state=RUNNABLE; OpenRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:33,286 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=25, state=RUNNABLE; OpenRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:33,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:15:33,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:33,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f7d07568d421cbca320503a84958086, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 13:15:33,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:33,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8844727d6df4b8d9b823e3faca5489d9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 13:15:33,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,442 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,445 DEBUG [StoreOpener-0f7d07568d421cbca320503a84958086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/f 2023-07-15 13:15:33,445 DEBUG [StoreOpener-0f7d07568d421cbca320503a84958086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/f 2023-07-15 13:15:33,446 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f7d07568d421cbca320503a84958086 columnFamilyName f 2023-07-15 13:15:33,447 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] regionserver.HStore(310): Store=0f7d07568d421cbca320503a84958086/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:33,447 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, 
for column family f of region 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,451 DEBUG [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/f 2023-07-15 13:15:33,452 DEBUG [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/f 2023-07-15 13:15:33,454 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8844727d6df4b8d9b823e3faca5489d9 columnFamilyName f 2023-07-15 13:15:33,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:33,455 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] regionserver.HStore(310): Store=8844727d6df4b8d9b823e3faca5489d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:33,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:33,462 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 0f7d07568d421cbca320503a84958086; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10598916160, jitterRate=-0.012899011373519897}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:33,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0f7d07568d421cbca320503a84958086: 2023-07-15 13:15:33,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:33,464 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086., pid=27, masterSystemTime=1689426933431 2023-07-15 13:15:33,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:33,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:33,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1569b942aa4f0e4d1bd934eb531919f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 13:15:33,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:33,471 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:33,471 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933471"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426933471"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426933471"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426933471"}]},"ts":"1689426933471"} 2023-07-15 13:15:33,472 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8844727d6df4b8d9b823e3faca5489d9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11308891520, jitterRate=0.053222596645355225}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:33,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8844727d6df4b8d9b823e3faca5489d9: 2023-07-15 13:15:33,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9., pid=30, masterSystemTime=1689426933434 2023-07-15 13:15:33,474 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,478 DEBUG [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/f 2023-07-15 13:15:33,478 DEBUG [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/f 2023-07-15 13:15:33,479 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1569b942aa4f0e4d1bd934eb531919f columnFamilyName f 2023-07-15 13:15:33,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:33,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:33,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,480 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,481 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933480"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426933480"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426933480"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426933480"}]},"ts":"1689426933480"} 2023-07-15 13:15:33,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1cbc749cd0d75a714d2d346cd086208, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 13:15:33,482 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] regionserver.HStore(310): Store=e1569b942aa4f0e4d1bd934eb531919f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:33,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,486 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-15 13:15:33,486 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; OpenRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,38761,1689426926503 in 199 msec 2023-07-15 13:15:33,487 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, ASSIGN in 375 msec 2023-07-15 13:15:33,492 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=23 2023-07-15 13:15:33,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:33,492 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=23, state=SUCCESS; OpenRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,44807,1689426930103 in 200 msec 2023-07-15 13:15:33,492 DEBUG [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/f 2023-07-15 13:15:33,492 DEBUG [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/f 2023-07-15 13:15:33,493 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1cbc749cd0d75a714d2d346cd086208 columnFamilyName f 2023-07-15 13:15:33,494 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] regionserver.HStore(310): Store=b1cbc749cd0d75a714d2d346cd086208/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:33,495 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, ASSIGN in 381 msec 2023-07-15 13:15:33,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:33,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1569b942aa4f0e4d1bd934eb531919f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10383287200, jitterRate=-0.0329810231924057}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:33,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1569b942aa4f0e4d1bd934eb531919f: 2023-07-15 13:15:33,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f., pid=31, masterSystemTime=1689426933431 2023-07-15 13:15:33,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,510 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:33,512 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:33,512 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426933511"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426933511"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426933511"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426933511"}]},"ts":"1689426933511"} 2023-07-15 13:15:33,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:33,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:33,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=25 2023-07-15 13:15:33,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=25, state=SUCCESS; OpenRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,38761,1689426926503 in 228 msec 2023-07-15 13:15:33,519 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
b1cbc749cd0d75a714d2d346cd086208; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11436536960, jitterRate=0.06511050462722778}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:33,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b1cbc749cd0d75a714d2d346cd086208: 2023-07-15 13:15:33,521 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208., pid=28, masterSystemTime=1689426933434 2023-07-15 13:15:33,521 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, ASSIGN in 407 msec 2023-07-15 13:15:33,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:33,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:33,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aea00e8f08a5ae412c1d320c570c4c46, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 13:15:33,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:33,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,525 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,525 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933525"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426933525"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426933525"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426933525"}]},"ts":"1689426933525"} 2023-07-15 13:15:33,526 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,529 DEBUG [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/f 2023-07-15 13:15:33,529 DEBUG [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/f 2023-07-15 13:15:33,529 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aea00e8f08a5ae412c1d320c570c4c46 columnFamilyName f 2023-07-15 13:15:33,531 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] regionserver.HStore(310): Store=aea00e8f08a5ae412c1d320c570c4c46/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:33,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=26 2023-07-15 13:15:33,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=26, state=SUCCESS; OpenRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,44807,1689426930103 in 250 msec 2023-07-15 13:15:33,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,538 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, ASSIGN in 425 msec 2023-07-15 13:15:33,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:33,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:33,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aea00e8f08a5ae412c1d320c570c4c46; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9470204640, jitterRate=-0.11801846325397491}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:33,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aea00e8f08a5ae412c1d320c570c4c46: 2023-07-15 13:15:33,545 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46., pid=29, masterSystemTime=1689426933434 2023-07-15 13:15:33,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:33,547 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 
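By this point all five regions have been opened, two on jenkins-hbase4.apache.org,38761 and three on jenkins-hbase4.apache.org,44807, as recorded by the OpenRegionProcedure results above. A hedged sketch of how a client could enumerate that assignment with a RegionLocator follows; it mirrors the check HBaseTestingUtility.waitUntilAllRegionsAssigned performs a few entries later, but the code itself is illustrative and not taken from the test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListAssignmentsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(
             TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
      // One HRegionLocation per region; after the log above this would list the
      // five encoded region names spread over the two region servers that opened them.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}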
2023-07-15 13:15:33,547 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:33,548 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426933547"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426933547"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426933547"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426933547"}]},"ts":"1689426933547"} 2023-07-15 13:15:33,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=22 2023-07-15 13:15:33,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=22, state=SUCCESS; OpenRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,44807,1689426930103 in 268 msec 2023-07-15 13:15:33,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-15 13:15:33,556 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, ASSIGN in 441 msec 2023-07-15 13:15:33,557 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:33,557 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426933557"}]},"ts":"1689426933557"} 2023-07-15 13:15:33,559 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-15 13:15:33,564 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:33,568 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 779 msec 2023-07-15 13:15:33,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:15:33,916 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-15 13:15:33,916 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-15 13:15:33,917 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:33,919 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34837] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:41232 deadline: 1689426993919, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=38761 startCode=1689426926503. As of locationSeqNum=16. 2023-07-15 13:15:34,022 DEBUG [hconnection-0x4925918a-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:34,026 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:34,042 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-15 13:15:34,043 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:34,043 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-15 13:15:34,043 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:34,048 DEBUG [Listener at localhost/38739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:34,052 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57686, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:34,055 DEBUG [Listener at localhost/38739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:34,061 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39966, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:34,061 DEBUG [Listener at localhost/38739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:34,067 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45012, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:34,069 DEBUG [Listener at localhost/38739] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:34,071 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33930, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:34,083 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:34,084 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:34,085 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,096 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:34,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:34,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:34,111 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,111 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region aea00e8f08a5ae412c1d320c570c4c46 to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:34,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:34,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:34,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:34,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:34,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, REOPEN/MOVE 2023-07-15 13:15:34,113 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region 8844727d6df4b8d9b823e3faca5489d9 to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,113 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, REOPEN/MOVE 2023-07-15 13:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:34,114 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:34,115 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:34,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, REOPEN/MOVE 2023-07-15 13:15:34,115 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934115"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934115"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934115"}]},"ts":"1689426934115"} 2023-07-15 13:15:34,115 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region 0f7d07568d421cbca320503a84958086 to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,116 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, REOPEN/MOVE 2023-07-15 13:15:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:34,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, REOPEN/MOVE 2023-07-15 13:15:34,119 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:34,119 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region e1569b942aa4f0e4d1bd934eb531919f to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,119 DEBUG 
[PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934119"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934119"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934119"}]},"ts":"1689426934119"} 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:34,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:34,121 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, REOPEN/MOVE 2023-07-15 13:15:34,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; CloseRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:34,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, REOPEN/MOVE 2023-07-15 13:15:34,122 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region b1cbc749cd0d75a714d2d346cd086208 to RSGroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:34,123 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:34,123 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934123"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934123"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934123"}]},"ts":"1689426934123"} 2023-07-15 13:15:34,125 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; CloseRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:34,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=34, state=RUNNABLE; CloseRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:34,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, REOPEN/MOVE 2023-07-15 13:15:34,127 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_348588243, current retry=0 2023-07-15 13:15:34,128 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, REOPEN/MOVE 2023-07-15 13:15:34,131 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, REOPEN/MOVE 2023-07-15 13:15:34,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:34,140 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934140"}]},"ts":"1689426934140"} 2023-07-15 13:15:34,141 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:34,141 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934141"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934141"}]},"ts":"1689426934141"} 2023-07-15 13:15:34,147 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=35, state=RUNNABLE; CloseRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,38761,1689426926503}] 
2023-07-15 13:15:34,148 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=37, state=RUNNABLE; CloseRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:34,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b1cbc749cd0d75a714d2d346cd086208, disabling compactions & flushes 2023-07-15 13:15:34,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:34,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:34,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. after waiting 0 ms 2023-07-15 13:15:34,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:34,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f7d07568d421cbca320503a84958086, disabling compactions & flushes 2023-07-15 13:15:34,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:34,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. after waiting 0 ms 2023-07-15 13:15:34,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 
2023-07-15 13:15:34,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b1cbc749cd0d75a714d2d346cd086208: 2023-07-15 13:15:34,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b1cbc749cd0d75a714d2d346cd086208 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:34,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aea00e8f08a5ae412c1d320c570c4c46, disabling compactions & flushes 2023-07-15 13:15:34,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. after waiting 0 ms 2023-07-15 13:15:34,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 
2023-07-15 13:15:34,319 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=CLOSED 2023-07-15 13:15:34,319 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934319"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426934319"}]},"ts":"1689426934319"} 2023-07-15 13:15:34,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=37 2023-07-15 13:15:34,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=37, state=SUCCESS; CloseRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,44807,1689426930103 in 176 msec 2023-07-15 13:15:34,333 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=37, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:34,342 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:34,345 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aea00e8f08a5ae412c1d320c570c4c46: 2023-07-15 13:15:34,345 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding aea00e8f08a5ae412c1d320c570c4c46 move to jenkins-hbase4.apache.org,37679,1689426926099 record at close sequenceid=2 2023-07-15 13:15:34,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:34,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 
2023-07-15 13:15:34,350 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f7d07568d421cbca320503a84958086: 2023-07-15 13:15:34,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0f7d07568d421cbca320503a84958086 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:34,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8844727d6df4b8d9b823e3faca5489d9, disabling compactions & flushes 2023-07-15 13:15:34,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:34,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:34,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. after waiting 0 ms 2023-07-15 13:15:34,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:34,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1569b942aa4f0e4d1bd934eb531919f, disabling compactions & flushes 2023-07-15 13:15:34,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:34,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:34,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. after waiting 0 ms 2023-07-15 13:15:34,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 
2023-07-15 13:15:34,359 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=CLOSED 2023-07-15 13:15:34,359 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=CLOSED 2023-07-15 13:15:34,359 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934359"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426934359"}]},"ts":"1689426934359"} 2023-07-15 13:15:34,359 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934359"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426934359"}]},"ts":"1689426934359"} 2023-07-15 13:15:34,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=34 2023-07-15 13:15:34,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=34, state=SUCCESS; CloseRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,38761,1689426926503 in 243 msec 2023-07-15 13:15:34,376 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-15 13:15:34,376 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; CloseRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,44807,1689426930103 in 248 msec 2023-07-15 13:15:34,377 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:34,377 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37679,1689426926099; forceNewPlan=false, retain=false 2023-07-15 13:15:34,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:34,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:34,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8844727d6df4b8d9b823e3faca5489d9: 2023-07-15 13:15:34,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8844727d6df4b8d9b823e3faca5489d9 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:34,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,399 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=CLOSED 2023-07-15 13:15:34,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:34,399 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426934399"}]},"ts":"1689426934399"} 2023-07-15 13:15:34,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:34,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1569b942aa4f0e4d1bd934eb531919f: 2023-07-15 13:15:34,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e1569b942aa4f0e4d1bd934eb531919f move to jenkins-hbase4.apache.org,37679,1689426926099 record at close sequenceid=2 2023-07-15 13:15:34,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,404 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=CLOSED 2023-07-15 13:15:34,404 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934404"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426934404"}]},"ts":"1689426934404"} 2023-07-15 13:15:34,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-15 13:15:34,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; CloseRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,44807,1689426930103 in 276 msec 2023-07-15 13:15:34,407 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:34,410 
INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=35 2023-07-15 13:15:34,411 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=35, state=SUCCESS; CloseRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,38761,1689426926503 in 260 msec 2023-07-15 13:15:34,412 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37679,1689426926099; forceNewPlan=false, retain=false 2023-07-15 13:15:34,483 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-15 13:15:34,484 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:34,484 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934484"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934484"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934484"}]},"ts":"1689426934484"} 2023-07-15 13:15:34,485 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:34,485 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934485"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934485"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934485"}]},"ts":"1689426934485"} 2023-07-15 13:15:34,485 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,486 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934485"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934485"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934485"}]},"ts":"1689426934485"} 2023-07-15 13:15:34,486 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,486 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934486"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934486"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934486"}]},"ts":"1689426934486"} 2023-07-15 13:15:34,487 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,487 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934487"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426934487"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426934487"}]},"ts":"1689426934487"} 2023-07-15 13:15:34,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=35, state=RUNNABLE; OpenRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:34,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=32, state=RUNNABLE; OpenRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:34,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=37, state=RUNNABLE; OpenRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:34,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=34, state=RUNNABLE; OpenRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:34,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=33, state=RUNNABLE; OpenRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:34,650 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:34,650 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:34,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 
2023-07-15 13:15:34,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1cbc749cd0d75a714d2d346cd086208, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 13:15:34,680 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:34,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:34,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,683 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,686 DEBUG [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/f 2023-07-15 13:15:34,686 DEBUG [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/f 2023-07-15 13:15:34,686 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1cbc749cd0d75a714d2d346cd086208 columnFamilyName f 2023-07-15 13:15:34,687 INFO [StoreOpener-b1cbc749cd0d75a714d2d346cd086208-1] regionserver.HStore(310): Store=b1cbc749cd0d75a714d2d346cd086208/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:34,690 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aea00e8f08a5ae412c1d320c570c4c46, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 13:15:34,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:34,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,696 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,700 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 13:15:34,701 DEBUG [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/f 2023-07-15 13:15:34,701 DEBUG [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/f 2023-07-15 13:15:34,701 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aea00e8f08a5ae412c1d320c570c4c46 columnFamilyName f 
2023-07-15 13:15:34,702 INFO [StoreOpener-aea00e8f08a5ae412c1d320c570c4c46-1] regionserver.HStore(310): Store=aea00e8f08a5ae412c1d320c570c4c46/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:34,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:34,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b1cbc749cd0d75a714d2d346cd086208; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11278821600, jitterRate=0.05042211711406708}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:34,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b1cbc749cd0d75a714d2d346cd086208: 2023-07-15 13:15:34,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:34,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208., pid=44, masterSystemTime=1689426934655 2023-07-15 13:15:34,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aea00e8f08a5ae412c1d320c570c4c46; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10683598240, jitterRate=-0.005012378096580505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:34,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aea00e8f08a5ae412c1d320c570c4c46: 2023-07-15 13:15:34,724 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46., pid=43, masterSystemTime=1689426934649 2023-07-15 13:15:34,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 
2023-07-15 13:15:34,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:34,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,735 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f7d07568d421cbca320503a84958086, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 13:15:34,736 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934735"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426934735"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426934735"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426934735"}]},"ts":"1689426934735"} 2023-07-15 13:15:34,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,738 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:34,738 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426934737"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426934737"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426934737"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426934737"}]},"ts":"1689426934737"} 2023-07-15 13:15:34,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:34,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:34,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:34,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1569b942aa4f0e4d1bd934eb531919f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 13:15:34,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:34,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=37 2023-07-15 13:15:34,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=37, state=SUCCESS; OpenRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,34837,1689426926314 in 246 msec 2023-07-15 13:15:34,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=32 2023-07-15 13:15:34,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=32, state=SUCCESS; OpenRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,37679,1689426926099 in 250 msec 2023-07-15 13:15:34,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, REOPEN/MOVE in 620 msec 2023-07-15 13:15:34,747 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, REOPEN/MOVE in 633 msec 2023-07-15 13:15:34,747 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,747 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 
13:15:34,749 DEBUG [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/f 2023-07-15 13:15:34,749 DEBUG [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/f 2023-07-15 13:15:34,749 DEBUG [StoreOpener-0f7d07568d421cbca320503a84958086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/f 2023-07-15 13:15:34,750 DEBUG [StoreOpener-0f7d07568d421cbca320503a84958086-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/f 2023-07-15 13:15:34,750 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1569b942aa4f0e4d1bd934eb531919f columnFamilyName f 2023-07-15 13:15:34,750 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f7d07568d421cbca320503a84958086 columnFamilyName f 2023-07-15 13:15:34,750 INFO [StoreOpener-e1569b942aa4f0e4d1bd934eb531919f-1] regionserver.HStore(310): Store=e1569b942aa4f0e4d1bd934eb531919f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:34,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,756 INFO [StoreOpener-0f7d07568d421cbca320503a84958086-1] regionserver.HStore(310): Store=0f7d07568d421cbca320503a84958086/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:34,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:34,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1569b942aa4f0e4d1bd934eb531919f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9595614080, jitterRate=-0.10633879899978638}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:34,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1569b942aa4f0e4d1bd934eb531919f: 2023-07-15 13:15:34,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f., pid=42, masterSystemTime=1689426934649 2023-07-15 13:15:34,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,767 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:34,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:34,767 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 
2023-07-15 13:15:34,767 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934767"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426934767"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426934767"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426934767"}]},"ts":"1689426934767"} 2023-07-15 13:15:34,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:34,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0f7d07568d421cbca320503a84958086; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10365255200, jitterRate=-0.03466038405895233}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:34,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0f7d07568d421cbca320503a84958086: 2023-07-15 13:15:34,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086., pid=45, masterSystemTime=1689426934655 2023-07-15 13:15:34,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=35 2023-07-15 13:15:34,774 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=35, state=SUCCESS; OpenRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,37679,1689426926099 in 282 msec 2023-07-15 13:15:34,819 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, REOPEN/MOVE in 655 msec 2023-07-15 13:15:34,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,820 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:34,820 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:34,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8844727d6df4b8d9b823e3faca5489d9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 13:15:34,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:34,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,827 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,827 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934827"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426934827"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426934827"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426934827"}]},"ts":"1689426934827"} 2023-07-15 13:15:34,833 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=34 2023-07-15 13:15:34,833 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=34, state=SUCCESS; OpenRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,34837,1689426926314 in 334 msec 2023-07-15 13:15:34,837 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, REOPEN/MOVE in 717 msec 2023-07-15 13:15:34,839 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,842 DEBUG [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/f 2023-07-15 13:15:34,842 DEBUG [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/f 2023-07-15 13:15:34,842 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8844727d6df4b8d9b823e3faca5489d9 columnFamilyName f 2023-07-15 13:15:34,843 INFO [StoreOpener-8844727d6df4b8d9b823e3faca5489d9-1] regionserver.HStore(310): Store=8844727d6df4b8d9b823e3faca5489d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:34,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:34,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8844727d6df4b8d9b823e3faca5489d9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10426158080, jitterRate=-0.028988361358642578}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:34,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8844727d6df4b8d9b823e3faca5489d9: 2023-07-15 13:15:34,858 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9., pid=46, masterSystemTime=1689426934655 2023-07-15 13:15:34,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:34,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:34,864 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:34,864 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426934863"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426934863"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426934863"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426934863"}]},"ts":"1689426934863"} 2023-07-15 13:15:34,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=33 2023-07-15 13:15:34,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=33, state=SUCCESS; OpenRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,34837,1689426926314 in 369 msec 2023-07-15 13:15:34,880 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:15:34,880 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-15 13:15:34,881 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:34,881 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-15 13:15:34,881 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, REOPEN/MOVE in 761 msec 2023-07-15 13:15:34,881 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:15:34,881 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-15 13:15:34,882 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-15 13:15:35,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-15 13:15:35,127 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_348588243. 
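
The MoveTables and GetRSGroupInfoOfTable requests logged above are the server side of the rsgroup admin client API (RSGroupAdminClient in the hbase-rsgroup module). A minimal sketch of the equivalent client-side calls, assuming an already-open cluster connection; the class wrapper and variable names below are illustrative and not taken from the test source:

    // Sketch only: approximates the client calls behind the RPCs logged above
    // (RSGroupAdminService.MoveTables / GetRSGroupInfoOfTable). The table and
    // group names are copied from the log; everything else is illustrative.
    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_348588243";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues RSGroupAdminService.MoveTables; the master then reopens every
          // region of the table on servers belonging to the target group
          // (the REOPEN/MOVE TransitRegionStateProcedures seen earlier).
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
          // Issues GetRSGroupInfoOfTable to confirm the table now maps to the group.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("Table is now in group: " + info.getName());
        }
      }
    }
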
2023-07-15 13:15:35,127 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:35,132 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:35,133 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:35,137 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,137 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:35,138 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:35,145 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,150 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,166 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426935166"}]},"ts":"1689426935166"} 2023-07-15 13:15:35,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-15 13:15:35,168 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-15 13:15:35,171 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-15 13:15:35,180 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, UNASSIGN}] 2023-07-15 13:15:35,192 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, UNASSIGN 2023-07-15 13:15:35,201 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, UNASSIGN 2023-07-15 13:15:35,202 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, UNASSIGN 2023-07-15 13:15:35,202 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, UNASSIGN 2023-07-15 13:15:35,202 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:35,202 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, UNASSIGN 2023-07-15 13:15:35,202 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426935202"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426935202"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426935202"}]},"ts":"1689426935202"} 2023-07-15 13:15:35,204 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:35,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:35,204 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426935204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426935204"}]},"ts":"1689426935204"} 2023-07-15 13:15:35,204 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:35,204 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426935204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426935204"}]},"ts":"1689426935204"} 2023-07-15 13:15:35,204 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:35,205 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426935204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426935204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426935204"}]},"ts":"1689426935204"} 2023-07-15 13:15:35,204 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426935204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426935204"}]},"ts":"1689426935204"} 2023-07-15 13:15:35,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE; CloseRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:35,212 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=51, state=RUNNABLE; CloseRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:35,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=49, state=RUNNABLE; CloseRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:35,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=48, state=RUNNABLE; CloseRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:35,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; CloseRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:35,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-15 13:15:35,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:35,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f7d07568d421cbca320503a84958086, disabling compactions & flushes 2023-07-15 13:15:35,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 
2023-07-15 13:15:35,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:35,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. after waiting 0 ms 2023-07-15 13:15:35,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 2023-07-15 13:15:35,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:35,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1569b942aa4f0e4d1bd934eb531919f, disabling compactions & flushes 2023-07-15 13:15:35,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:35,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:35,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. after waiting 0 ms 2023-07-15 13:15:35,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 2023-07-15 13:15:35,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:35,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086. 
2023-07-15 13:15:35,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f7d07568d421cbca320503a84958086: 2023-07-15 13:15:35,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:35,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0f7d07568d421cbca320503a84958086 2023-07-15 13:15:35,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:35,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b1cbc749cd0d75a714d2d346cd086208, disabling compactions & flushes 2023-07-15 13:15:35,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:35,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:35,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. after waiting 0 ms 2023-07-15 13:15:35,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:35,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f. 
2023-07-15 13:15:35,400 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=0f7d07568d421cbca320503a84958086, regionState=CLOSED 2023-07-15 13:15:35,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1569b942aa4f0e4d1bd934eb531919f: 2023-07-15 13:15:35,401 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935400"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426935400"}]},"ts":"1689426935400"} 2023-07-15 13:15:35,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:35,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:35,408 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=e1569b942aa4f0e4d1bd934eb531919f, regionState=CLOSED 2023-07-15 13:15:35,408 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426935408"}]},"ts":"1689426935408"} 2023-07-15 13:15:35,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:35,410 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-15 13:15:35,410 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; CloseRegionProcedure 0f7d07568d421cbca320503a84958086, server=jenkins-hbase4.apache.org,34837,1689426926314 in 184 msec 2023-07-15 13:15:35,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aea00e8f08a5ae412c1d320c570c4c46, disabling compactions & flushes 2023-07-15 13:15:35,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:35,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:35,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. after waiting 0 ms 2023-07-15 13:15:35,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 
2023-07-15 13:15:35,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208. 2023-07-15 13:15:35,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b1cbc749cd0d75a714d2d346cd086208: 2023-07-15 13:15:35,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0f7d07568d421cbca320503a84958086, UNASSIGN in 234 msec 2023-07-15 13:15:35,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=51 2023-07-15 13:15:35,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=51, state=SUCCESS; CloseRegionProcedure e1569b942aa4f0e4d1bd934eb531919f, server=jenkins-hbase4.apache.org,37679,1689426926099 in 199 msec 2023-07-15 13:15:35,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:35,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:35,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8844727d6df4b8d9b823e3faca5489d9, disabling compactions & flushes 2023-07-15 13:15:35,429 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:35,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 2023-07-15 13:15:35,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. after waiting 0 ms 2023-07-15 13:15:35,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:35,429 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=b1cbc749cd0d75a714d2d346cd086208, regionState=CLOSED 2023-07-15 13:15:35,429 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426935429"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426935429"}]},"ts":"1689426935429"} 2023-07-15 13:15:35,431 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e1569b942aa4f0e4d1bd934eb531919f, UNASSIGN in 251 msec 2023-07-15 13:15:35,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:35,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46. 2023-07-15 13:15:35,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aea00e8f08a5ae412c1d320c570c4c46: 2023-07-15 13:15:35,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=52 2023-07-15 13:15:35,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; CloseRegionProcedure b1cbc749cd0d75a714d2d346cd086208, server=jenkins-hbase4.apache.org,34837,1689426926314 in 224 msec 2023-07-15 13:15:35,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:35,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b1cbc749cd0d75a714d2d346cd086208, UNASSIGN in 256 msec 2023-07-15 13:15:35,443 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=aea00e8f08a5ae412c1d320c570c4c46, regionState=CLOSED 2023-07-15 13:15:35,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:35,443 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426935442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426935442"}]},"ts":"1689426935442"} 2023-07-15 13:15:35,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9. 
2023-07-15 13:15:35,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8844727d6df4b8d9b823e3faca5489d9: 2023-07-15 13:15:35,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:35,452 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=8844727d6df4b8d9b823e3faca5489d9, regionState=CLOSED 2023-07-15 13:15:35,452 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426935452"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426935452"}]},"ts":"1689426935452"} 2023-07-15 13:15:35,455 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=48 2023-07-15 13:15:35,455 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=48, state=SUCCESS; CloseRegionProcedure aea00e8f08a5ae412c1d320c570c4c46, server=jenkins-hbase4.apache.org,37679,1689426926099 in 230 msec 2023-07-15 13:15:35,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=aea00e8f08a5ae412c1d320c570c4c46, UNASSIGN in 279 msec 2023-07-15 13:15:35,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=49 2023-07-15 13:15:35,458 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=49, state=SUCCESS; CloseRegionProcedure 8844727d6df4b8d9b823e3faca5489d9, server=jenkins-hbase4.apache.org,34837,1689426926314 in 241 msec 2023-07-15 13:15:35,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=47 2023-07-15 13:15:35,464 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8844727d6df4b8d9b823e3faca5489d9, UNASSIGN in 282 msec 2023-07-15 13:15:35,469 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426935468"}]},"ts":"1689426935468"} 2023-07-15 13:15:35,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-15 13:15:35,471 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-15 13:15:35,473 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-15 13:15:35,478 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 322 msec 2023-07-15 13:15:35,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-15 13:15:35,773 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-15 13:15:35,774 INFO [Listener at 
localhost/38739] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,779 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:35,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-15 13:15:35,792 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-15 13:15:35,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-15 13:15:35,806 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:35,806 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:35,806 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:35,806 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:35,806 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:35,810 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits] 2023-07-15 13:15:35,810 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits] 2023-07-15 13:15:35,810 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/f, FileablePath, 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits] 2023-07-15 13:15:35,810 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits] 2023-07-15 13:15:35,814 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits] 2023-07-15 13:15:35,832 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9/recovered.edits/7.seqid 2023-07-15 13:15:35,832 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46/recovered.edits/7.seqid 2023-07-15 13:15:35,835 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f/recovered.edits/7.seqid 2023-07-15 13:15:35,835 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8844727d6df4b8d9b823e3faca5489d9 2023-07-15 13:15:35,835 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086/recovered.edits/7.seqid 2023-07-15 13:15:35,835 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208/recovered.edits/7.seqid 2023-07-15 13:15:35,835 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/aea00e8f08a5ae412c1d320c570c4c46 2023-07-15 13:15:35,836 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b1cbc749cd0d75a714d2d346cd086208 2023-07-15 13:15:35,837 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0f7d07568d421cbca320503a84958086 2023-07-15 13:15:35,837 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e1569b942aa4f0e4d1bd934eb531919f 2023-07-15 13:15:35,837 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 13:15:35,879 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-15 13:15:35,885 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-15 13:15:35,885 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-15 13:15:35,886 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426935885"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,886 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426935885"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,886 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426932782.0f7d07568d421cbca320503a84958086.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426935885"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,886 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426935885"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,886 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426935885"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,888 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 13:15:35,889 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => aea00e8f08a5ae412c1d320c570c4c46, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426932782.aea00e8f08a5ae412c1d320c570c4c46.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8844727d6df4b8d9b823e3faca5489d9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426932782.8844727d6df4b8d9b823e3faca5489d9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 0f7d07568d421cbca320503a84958086, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426932782.0f7d07568d421cbca320503a84958086.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => e1569b942aa4f0e4d1bd934eb531919f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426932782.e1569b942aa4f0e4d1bd934eb531919f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b1cbc749cd0d75a714d2d346cd086208, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426932782.b1cbc749cd0d75a714d2d346cd086208.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 13:15:35,889 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-15 13:15:35,889 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426935889"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:35,891 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-15 13:15:35,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-15 13:15:35,900 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:35,900 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:35,900 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 2023-07-15 13:15:35,900 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:35,900 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:35,901 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 empty. 2023-07-15 13:15:35,901 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea empty. 2023-07-15 13:15:35,901 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 empty. 2023-07-15 13:15:35,901 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e empty. 2023-07-15 13:15:35,901 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb empty. 
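
The DisableTableProcedure (pid=47) and TruncateTableProcedure (pid=58, preserveSplits=true) driving the unassigns and region archiving above are triggered by two synchronous Admin calls on the client. A minimal sketch, assuming an open Connection is passed in; the wrapper class and method name are illustrative only:

    // Sketch only: the client-side sequence that produces DisableTableProcedure
    // followed by TruncateTableProcedure (preserveSplits=true) as logged above.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    final class TruncateSketch {
      static void disableAndTruncate(Connection conn) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Admin admin = conn.getAdmin()) {
          // DisableTableProcedure: unassigns (closes) every region of the table.
          admin.disableTable(table);
          // TruncateTableProcedure: archives the old region directories, deletes
          // the meta rows, then recreates regions with the same split keys
          // because preserveSplits is true. Truncate requires the table to be
          // disabled first, matching the order seen in the log.
          admin.truncateTable(table, true);
        }
      }
    }
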
2023-07-15 13:15:35,902 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:35,902 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 2023-07-15 13:15:35,902 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:35,902 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:35,902 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:35,902 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 13:15:35,924 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:35,926 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 08aa30773eb7e0fad7c7048380781f5e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:35,926 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6e9779090e8d7e17e86f98b5d48d7f02, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:35,930 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1f25794acc157d14397cd229198162ea, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 08aa30773eb7e0fad7c7048380781f5e, disabling compactions & flushes 2023-07-15 13:15:35,960 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. after waiting 0 ms 2023-07-15 13:15:35,960 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:35,961 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 
2023-07-15 13:15:35,961 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 08aa30773eb7e0fad7c7048380781f5e: 2023-07-15 13:15:35,961 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3c99428cbc0322a583a670eface18299, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:35,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:35,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:35,963 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 1f25794acc157d14397cd229198162ea, disabling compactions & flushes 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 6e9779090e8d7e17e86f98b5d48d7f02, disabling compactions & flushes 2023-07-15 13:15:35,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:35,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. after waiting 0 ms 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 
after waiting 0 ms 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:35,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:35,964 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 1f25794acc157d14397cd229198162ea: 2023-07-15 13:15:35,964 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 6e9779090e8d7e17e86f98b5d48d7f02: 2023-07-15 13:15:35,965 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 268668eabc4ad70d18b3d08a94025adb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 3c99428cbc0322a583a670eface18299, disabling compactions & flushes 2023-07-15 13:15:35,998 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 
after waiting 0 ms 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:35,998 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:35,998 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 3c99428cbc0322a583a670eface18299: 2023-07-15 13:15:36,005 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,005 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 268668eabc4ad70d18b3d08a94025adb, disabling compactions & flushes 2023-07-15 13:15:36,005 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:36,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:36,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. after waiting 0 ms 2023-07-15 13:15:36,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:36,006 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 
2023-07-15 13:15:36,006 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 268668eabc4ad70d18b3d08a94025adb: 2023-07-15 13:15:36,011 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426936011"}]},"ts":"1689426936011"} 2023-07-15 13:15:36,011 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426936011"}]},"ts":"1689426936011"} 2023-07-15 13:15:36,011 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426936011"}]},"ts":"1689426936011"} 2023-07-15 13:15:36,011 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426936011"}]},"ts":"1689426936011"} 2023-07-15 13:15:36,011 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936011"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426936011"}]},"ts":"1689426936011"} 2023-07-15 13:15:36,018 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
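The five region boundaries just re-added to hbase:meta ('' / aaaaa / i\xBF\x14i\xBE / r\x1C\xC7r\x1B / zzzzz / '') are the table's original split points, which the truncate keeps because it runs with preserveSplits=true (see pid=58 further down). A minimal sketch, assuming only the stock HBase 2.x client API, of how a pre-split table with the single family 'f' from the logged descriptor could be set up in the first place; the class and variable names are hypothetical and not taken from the test source:

    // Illustrative sketch only: pre-split table with one family 'f' (VERSIONS => '1',
    // as in the descriptor logged above), created via the standard 2.x Admin API.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)   // VERSIONS => '1' in the logged descriptor
                .build())
            .build();
        // Four split keys yield the five regions seen in the log; the two middle keys
        // contain the non-printable bytes rendered above as \xBF\x14 and \x1C\xC7.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splits);
        }
      }
    }
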
2023-07-15 13:15:36,019 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426936019"}]},"ts":"1689426936019"} 2023-07-15 13:15:36,021 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-15 13:15:36,026 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:36,026 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:36,026 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:36,026 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:36,026 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, ASSIGN}] 2023-07-15 13:15:36,029 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, ASSIGN 2023-07-15 13:15:36,029 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, ASSIGN 2023-07-15 13:15:36,029 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, ASSIGN 2023-07-15 13:15:36,030 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, ASSIGN 2023-07-15 13:15:36,030 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, ASSIGN 2023-07-15 13:15:36,031 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:36,031 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1689426926099; forceNewPlan=false, retain=false 2023-07-15 13:15:36,031 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37679,1689426926099; forceNewPlan=false, retain=false 2023-07-15 13:15:36,031 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:36,032 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:36,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-15 13:15:36,181 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
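The location values chosen here (host,port,startcode triples such as jenkins-hbase4.apache.org,37679,1689426926099) are what get written into hbase:meta and what clients later read back when locating regions. A small illustrative sketch, assuming the same table name and a standard 2.x Connection (the helper class is hypothetical), of reading those assignments back from the client side:

    // Illustrative sketch only: print which server each region of the table is
    // currently assigned to, using the client-side RegionLocator.
    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ListAssignmentsSketch {
      public static void main(String[] args) throws IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            // Encoded region name -> ServerName (host,port,startcode), as in the log.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
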
2023-07-15 13:15:36,184 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=3c99428cbc0322a583a670eface18299, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,184 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=08aa30773eb7e0fad7c7048380781f5e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,184 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=268668eabc4ad70d18b3d08a94025adb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,184 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=1f25794acc157d14397cd229198162ea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,185 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936184"}]},"ts":"1689426936184"} 2023-07-15 13:15:36,184 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=6e9779090e8d7e17e86f98b5d48d7f02, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,185 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936184"}]},"ts":"1689426936184"} 2023-07-15 13:15:36,185 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936184"}]},"ts":"1689426936184"} 2023-07-15 13:15:36,185 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936184"}]},"ts":"1689426936184"} 2023-07-15 13:15:36,185 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936184"}]},"ts":"1689426936184"} 2023-07-15 13:15:36,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE; OpenRegionProcedure 
268668eabc4ad70d18b3d08a94025adb, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:36,188 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=61, state=RUNNABLE; OpenRegionProcedure 1f25794acc157d14397cd229198162ea, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:36,191 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; OpenRegionProcedure 6e9779090e8d7e17e86f98b5d48d7f02, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:36,194 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=60, state=RUNNABLE; OpenRegionProcedure 08aa30773eb7e0fad7c7048380781f5e, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:36,195 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; OpenRegionProcedure 3c99428cbc0322a583a670eface18299, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:36,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:36,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e9779090e8d7e17e86f98b5d48d7f02, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 13:15:36,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,347 INFO [StoreOpener-6e9779090e8d7e17e86f98b5d48d7f02-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 
2023-07-15 13:15:36,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3c99428cbc0322a583a670eface18299, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 13:15:36,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,349 DEBUG [StoreOpener-6e9779090e8d7e17e86f98b5d48d7f02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/f 2023-07-15 13:15:36,349 DEBUG [StoreOpener-6e9779090e8d7e17e86f98b5d48d7f02-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/f 2023-07-15 13:15:36,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,350 INFO [StoreOpener-6e9779090e8d7e17e86f98b5d48d7f02-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e9779090e8d7e17e86f98b5d48d7f02 columnFamilyName f 2023-07-15 13:15:36,350 INFO [StoreOpener-6e9779090e8d7e17e86f98b5d48d7f02-1] regionserver.HStore(310): Store=6e9779090e8d7e17e86f98b5d48d7f02/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:36,351 INFO [StoreOpener-3c99428cbc0322a583a670eface18299-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,352 DEBUG [StoreOpener-3c99428cbc0322a583a670eface18299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/f 2023-07-15 13:15:36,352 DEBUG [StoreOpener-3c99428cbc0322a583a670eface18299-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/f 2023-07-15 13:15:36,353 INFO [StoreOpener-3c99428cbc0322a583a670eface18299-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3c99428cbc0322a583a670eface18299 columnFamilyName f 2023-07-15 13:15:36,353 INFO [StoreOpener-3c99428cbc0322a583a670eface18299-1] regionserver.HStore(310): Store=3c99428cbc0322a583a670eface18299/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:36,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:36,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:36,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:36,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
6e9779090e8d7e17e86f98b5d48d7f02; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10819600320, jitterRate=0.0076538026332855225}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:36,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6e9779090e8d7e17e86f98b5d48d7f02: 2023-07-15 13:15:36,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02., pid=66, masterSystemTime=1689426936341 2023-07-15 13:15:36,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:36,362 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3c99428cbc0322a583a670eface18299; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11668852640, jitterRate=0.08674658834934235}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:36,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3c99428cbc0322a583a670eface18299: 2023-07-15 13:15:36,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299., pid=68, masterSystemTime=1689426936345 2023-07-15 13:15:36,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:36,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:36,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 
2023-07-15 13:15:36,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 268668eabc4ad70d18b3d08a94025adb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 13:15:36,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,364 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=6e9779090e8d7e17e86f98b5d48d7f02, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,364 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936364"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426936364"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426936364"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426936364"}]},"ts":"1689426936364"} 2023-07-15 13:15:36,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:36,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:36,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 
2023-07-15 13:15:36,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f25794acc157d14397cd229198162ea, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 13:15:36,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,366 INFO [StoreOpener-268668eabc4ad70d18b3d08a94025adb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,367 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=3c99428cbc0322a583a670eface18299, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,367 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936367"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426936367"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426936367"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426936367"}]},"ts":"1689426936367"} 2023-07-15 13:15:36,369 DEBUG [StoreOpener-268668eabc4ad70d18b3d08a94025adb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/f 2023-07-15 13:15:36,369 DEBUG [StoreOpener-268668eabc4ad70d18b3d08a94025adb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/f 2023-07-15 13:15:36,370 INFO [StoreOpener-268668eabc4ad70d18b3d08a94025adb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 268668eabc4ad70d18b3d08a94025adb columnFamilyName f 2023-07-15 13:15:36,370 INFO [StoreOpener-268668eabc4ad70d18b3d08a94025adb-1] regionserver.HStore(310): Store=268668eabc4ad70d18b3d08a94025adb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:36,371 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-15 13:15:36,371 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; OpenRegionProcedure 6e9779090e8d7e17e86f98b5d48d7f02, server=jenkins-hbase4.apache.org,34837,1689426926314 in 175 msec 2023-07-15 13:15:36,371 INFO [StoreOpener-1f25794acc157d14397cd229198162ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,373 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-15 13:15:36,373 DEBUG [StoreOpener-1f25794acc157d14397cd229198162ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/f 2023-07-15 13:15:36,374 DEBUG [StoreOpener-1f25794acc157d14397cd229198162ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/f 2023-07-15 13:15:36,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, ASSIGN in 345 msec 2023-07-15 13:15:36,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; OpenRegionProcedure 3c99428cbc0322a583a670eface18299, server=jenkins-hbase4.apache.org,37679,1689426926099 in 174 msec 2023-07-15 13:15:36,374 INFO [StoreOpener-1f25794acc157d14397cd229198162ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f25794acc157d14397cd229198162ea columnFamilyName f 2023-07-15 13:15:36,375 INFO [StoreOpener-1f25794acc157d14397cd229198162ea-1] regionserver.HStore(310): Store=1f25794acc157d14397cd229198162ea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:36,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, ASSIGN in 347 msec 2023-07-15 13:15:36,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:36,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:36,379 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 268668eabc4ad70d18b3d08a94025adb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10028800960, jitterRate=-0.06599512696266174}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:36,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 268668eabc4ad70d18b3d08a94025adb: 2023-07-15 13:15:36,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:36,380 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb., pid=64, masterSystemTime=1689426936341 2023-07-15 13:15:36,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:36,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 
2023-07-15 13:15:36,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:36,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:36,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08aa30773eb7e0fad7c7048380781f5e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 13:15:36,383 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=268668eabc4ad70d18b3d08a94025adb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,383 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936382"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426936382"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426936382"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426936382"}]},"ts":"1689426936382"} 2023-07-15 13:15:36,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:36,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f25794acc157d14397cd229198162ea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11349442080, jitterRate=0.05699916183948517}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:36,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f25794acc157d14397cd229198162ea: 2023-07-15 13:15:36,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea., pid=65, masterSystemTime=1689426936345 2023-07-15 13:15:36,385 INFO [StoreOpener-08aa30773eb7e0fad7c7048380781f5e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:36,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:36,387 DEBUG [StoreOpener-08aa30773eb7e0fad7c7048380781f5e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/f 2023-07-15 13:15:36,387 DEBUG [StoreOpener-08aa30773eb7e0fad7c7048380781f5e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/f 2023-07-15 13:15:36,387 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=1f25794acc157d14397cd229198162ea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,387 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936387"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426936387"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426936387"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426936387"}]},"ts":"1689426936387"} 2023-07-15 13:15:36,387 INFO [StoreOpener-08aa30773eb7e0fad7c7048380781f5e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08aa30773eb7e0fad7c7048380781f5e columnFamilyName f 2023-07-15 13:15:36,388 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-15 13:15:36,388 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; OpenRegionProcedure 268668eabc4ad70d18b3d08a94025adb, server=jenkins-hbase4.apache.org,34837,1689426926314 in 197 msec 2023-07-15 13:15:36,388 INFO [StoreOpener-08aa30773eb7e0fad7c7048380781f5e-1] regionserver.HStore(310): Store=08aa30773eb7e0fad7c7048380781f5e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:36,390 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,390 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, ASSIGN in 362 msec 2023-07-15 13:15:36,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=61 2023-07-15 13:15:36,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=61, state=SUCCESS; OpenRegionProcedure 1f25794acc157d14397cd229198162ea, server=jenkins-hbase4.apache.org,37679,1689426926099 in 201 msec 2023-07-15 13:15:36,393 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, ASSIGN in 366 msec 2023-07-15 13:15:36,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:36,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:36,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08aa30773eb7e0fad7c7048380781f5e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10050432000, jitterRate=-0.0639805793762207}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:36,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08aa30773eb7e0fad7c7048380781f5e: 2023-07-15 13:15:36,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e., pid=67, masterSystemTime=1689426936341 2023-07-15 13:15:36,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-15 13:15:36,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:36,400 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 
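All five regions are now open and their OPEN state is about to be written back to hbase:meta; the rest of this log is the truncate procedure (pid=58) finishing, the client reading rsgroup info, and then the disable of the table beginning (pid=69). A hedged sketch, assuming the standard blocking Admin API rather than the test's own helpers (which this log does not show), of the client-side calls that drive such a truncate-then-disable-then-drop sequence:

    // Illustrative sketch only: the Admin calls whose server-side effects a log like
    // this records. The actual test drives them through its own utility classes.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateAndDropSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(tn);          // a table must be disabled before truncation
          admin.truncateTable(tn, true);   // preserveSplits=true, matching pid=58; re-enables the table
          // ... reads/writes against the re-created table would go here ...
          admin.disableTable(tn);          // mirrors the DisableTableProcedure below (pid=69)
          admin.deleteTable(tn);           // the "Drop" part of the test's name
        }
      }
    }
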
2023-07-15 13:15:36,400 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=08aa30773eb7e0fad7c7048380781f5e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,401 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936400"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426936400"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426936400"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426936400"}]},"ts":"1689426936400"} 2023-07-15 13:15:36,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=60 2023-07-15 13:15:36,405 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=60, state=SUCCESS; OpenRegionProcedure 08aa30773eb7e0fad7c7048380781f5e, server=jenkins-hbase4.apache.org,34837,1689426926314 in 210 msec 2023-07-15 13:15:36,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=58 2023-07-15 13:15:36,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, ASSIGN in 379 msec 2023-07-15 13:15:36,406 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426936406"}]},"ts":"1689426936406"} 2023-07-15 13:15:36,408 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-15 13:15:36,410 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-15 13:15:36,411 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 629 msec 2023-07-15 13:15:36,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-15 13:15:36,900 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-15 13:15:36,901 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:36,902 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:36,903 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:36,903 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:36,904 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:36,905 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:36,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:36,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-15 13:15:36,911 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426936911"}]},"ts":"1689426936911"} 2023-07-15 13:15:36,912 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-15 13:15:36,915 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-15 13:15:36,916 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, UNASSIGN}] 2023-07-15 13:15:36,919 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, UNASSIGN 2023-07-15 13:15:36,919 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, UNASSIGN 2023-07-15 13:15:36,919 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, UNASSIGN 2023-07-15 13:15:36,919 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, UNASSIGN 2023-07-15 
13:15:36,920 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, UNASSIGN 2023-07-15 13:15:36,920 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=1f25794acc157d14397cd229198162ea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,920 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=08aa30773eb7e0fad7c7048380781f5e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,921 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936920"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936920"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936920"}]},"ts":"1689426936920"} 2023-07-15 13:15:36,921 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936920"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936920"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936920"}]},"ts":"1689426936920"} 2023-07-15 13:15:36,921 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=3c99428cbc0322a583a670eface18299, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:36,921 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=6e9779090e8d7e17e86f98b5d48d7f02, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,921 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426936921"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936921"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936921"}]},"ts":"1689426936921"} 2023-07-15 13:15:36,921 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936921"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936921"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936921"}]},"ts":"1689426936921"} 2023-07-15 13:15:36,922 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=268668eabc4ad70d18b3d08a94025adb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:36,923 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426936922"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426936922"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426936922"}]},"ts":"1689426936922"} 2023-07-15 13:15:36,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure 1f25794acc157d14397cd229198162ea, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:36,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 08aa30773eb7e0fad7c7048380781f5e, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:36,926 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=73, state=RUNNABLE; CloseRegionProcedure 3c99428cbc0322a583a670eface18299, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:36,928 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=70, state=RUNNABLE; CloseRegionProcedure 6e9779090e8d7e17e86f98b5d48d7f02, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:36,929 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=74, state=RUNNABLE; CloseRegionProcedure 268668eabc4ad70d18b3d08a94025adb, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:37,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-15 13:15:37,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:37,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f25794acc157d14397cd229198162ea, disabling compactions & flushes 2023-07-15 13:15:37,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:37,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:37,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. after waiting 0 ms 2023-07-15 13:15:37,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 
2023-07-15 13:15:37,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:37,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 268668eabc4ad70d18b3d08a94025adb, disabling compactions & flushes 2023-07-15 13:15:37,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:37,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:37,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. after waiting 0 ms 2023-07-15 13:15:37,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:37,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:37,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:37,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea. 2023-07-15 13:15:37,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f25794acc157d14397cd229198162ea: 2023-07-15 13:15:37,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb. 2023-07-15 13:15:37,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 268668eabc4ad70d18b3d08a94025adb: 2023-07-15 13:15:37,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1f25794acc157d14397cd229198162ea 2023-07-15 13:15:37,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:37,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3c99428cbc0322a583a670eface18299, disabling compactions & flushes 2023-07-15 13:15:37,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 
2023-07-15 13:15:37,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:37,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. after waiting 0 ms 2023-07-15 13:15:37,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:37,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=1f25794acc157d14397cd229198162ea, regionState=CLOSED 2023-07-15 13:15:37,105 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426937104"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426937104"}]},"ts":"1689426937104"} 2023-07-15 13:15:37,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:37,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:37,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08aa30773eb7e0fad7c7048380781f5e, disabling compactions & flushes 2023-07-15 13:15:37,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:37,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 2023-07-15 13:15:37,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. after waiting 0 ms 2023-07-15 13:15:37,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 
2023-07-15 13:15:37,108 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=268668eabc4ad70d18b3d08a94025adb, regionState=CLOSED 2023-07-15 13:15:37,110 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426937108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426937108"}]},"ts":"1689426937108"} 2023-07-15 13:15:37,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:37,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299. 2023-07-15 13:15:37,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3c99428cbc0322a583a670eface18299: 2023-07-15 13:15:37,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:37,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e. 
2023-07-15 13:15:37,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08aa30773eb7e0fad7c7048380781f5e: 2023-07-15 13:15:37,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3c99428cbc0322a583a670eface18299 2023-07-15 13:15:37,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=72 2023-07-15 13:15:37,115 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=3c99428cbc0322a583a670eface18299, regionState=CLOSED 2023-07-15 13:15:37,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=72, state=SUCCESS; CloseRegionProcedure 1f25794acc157d14397cd229198162ea, server=jenkins-hbase4.apache.org,37679,1689426926099 in 186 msec 2023-07-15 13:15:37,115 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426937115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426937115"}]},"ts":"1689426937115"} 2023-07-15 13:15:37,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:37,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6e9779090e8d7e17e86f98b5d48d7f02, disabling compactions & flushes 2023-07-15 13:15:37,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 2023-07-15 13:15:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. after waiting 0 ms 2023-07-15 13:15:37,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 
2023-07-15 13:15:37,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=74 2023-07-15 13:15:37,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=74, state=SUCCESS; CloseRegionProcedure 268668eabc4ad70d18b3d08a94025adb, server=jenkins-hbase4.apache.org,34837,1689426926314 in 183 msec 2023-07-15 13:15:37,117 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=08aa30773eb7e0fad7c7048380781f5e, regionState=CLOSED 2023-07-15 13:15:37,117 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689426937117"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426937117"}]},"ts":"1689426937117"} 2023-07-15 13:15:37,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1f25794acc157d14397cd229198162ea, UNASSIGN in 199 msec 2023-07-15 13:15:37,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=268668eabc4ad70d18b3d08a94025adb, UNASSIGN in 201 msec 2023-07-15 13:15:37,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=73 2023-07-15 13:15:37,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=73, state=SUCCESS; CloseRegionProcedure 3c99428cbc0322a583a670eface18299, server=jenkins-hbase4.apache.org,37679,1689426926099 in 192 msec 2023-07-15 13:15:37,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-15 13:15:37,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 08aa30773eb7e0fad7c7048380781f5e, server=jenkins-hbase4.apache.org,34837,1689426926314 in 194 msec 2023-07-15 13:15:37,126 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3c99428cbc0322a583a670eface18299, UNASSIGN in 208 msec 2023-07-15 13:15:37,127 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08aa30773eb7e0fad7c7048380781f5e, UNASSIGN in 209 msec 2023-07-15 13:15:37,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:37,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02. 
2023-07-15 13:15:37,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6e9779090e8d7e17e86f98b5d48d7f02: 2023-07-15 13:15:37,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:37,130 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=6e9779090e8d7e17e86f98b5d48d7f02, regionState=CLOSED 2023-07-15 13:15:37,131 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689426937130"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426937130"}]},"ts":"1689426937130"} 2023-07-15 13:15:37,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=70 2023-07-15 13:15:37,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=70, state=SUCCESS; CloseRegionProcedure 6e9779090e8d7e17e86f98b5d48d7f02, server=jenkins-hbase4.apache.org,34837,1689426926314 in 205 msec 2023-07-15 13:15:37,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=69 2023-07-15 13:15:37,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6e9779090e8d7e17e86f98b5d48d7f02, UNASSIGN in 219 msec 2023-07-15 13:15:37,138 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426937138"}]},"ts":"1689426937138"} 2023-07-15 13:15:37,139 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-15 13:15:37,141 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-15 13:15:37,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 237 msec 2023-07-15 13:15:37,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-15 13:15:37,214 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-15 13:15:37,221 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,232 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_348588243' 2023-07-15 13:15:37,234 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:37,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:37,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-15 13:15:37,252 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:37,252 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:37,252 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:37,252 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 2023-07-15 13:15:37,252 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:37,256 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/recovered.edits] 2023-07-15 13:15:37,257 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/recovered.edits] 2023-07-15 13:15:37,257 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/recovered.edits] 2023-07-15 13:15:37,257 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/recovered.edits] 2023-07-15 13:15:37,257 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/recovered.edits] 2023-07-15 13:15:37,266 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299/recovered.edits/4.seqid 2023-07-15 13:15:37,267 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e/recovered.edits/4.seqid 2023-07-15 13:15:37,267 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb/recovered.edits/4.seqid 2023-07-15 13:15:37,267 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea/recovered.edits/4.seqid 2023-07-15 13:15:37,268 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3c99428cbc0322a583a670eface18299 
2023-07-15 13:15:37,268 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02/recovered.edits/4.seqid 2023-07-15 13:15:37,268 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08aa30773eb7e0fad7c7048380781f5e 2023-07-15 13:15:37,269 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1f25794acc157d14397cd229198162ea 2023-07-15 13:15:37,269 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/268668eabc4ad70d18b3d08a94025adb 2023-07-15 13:15:37,269 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6e9779090e8d7e17e86f98b5d48d7f02 2023-07-15 13:15:37,269 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 13:15:37,272 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,278 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-15 13:15:37,281 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-15 13:15:37,283 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,283 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-15 13:15:37,283 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426937283"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,284 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426937283"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,284 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689426935841.1f25794acc157d14397cd229198162ea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426937283"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,284 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689426935841.3c99428cbc0322a583a670eface18299.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426937283"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,284 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426937283"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,286 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 13:15:37,286 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6e9779090e8d7e17e86f98b5d48d7f02, NAME => 'Group_testTableMoveTruncateAndDrop,,1689426935840.6e9779090e8d7e17e86f98b5d48d7f02.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 08aa30773eb7e0fad7c7048380781f5e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689426935841.08aa30773eb7e0fad7c7048380781f5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 1f25794acc157d14397cd229198162ea, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689426935841.1f25794acc157d14397cd229198162ea.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 3c99428cbc0322a583a670eface18299, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689426935841.3c99428cbc0322a583a670eface18299.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 268668eabc4ad70d18b3d08a94025adb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689426935841.268668eabc4ad70d18b3d08a94025adb.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 13:15:37,286 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-15 13:15:37,286 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426937286"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:37,288 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-15 13:15:37,290 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 13:15:37,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 68 msec 2023-07-15 13:15:37,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-15 13:15:37,351 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-15 13:15:37,352 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:37,352 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,355 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34837] ipc.CallRunner(144): callId: 165 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:41210 deadline: 1689426997355, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=6. 2023-07-15 13:15:37,459 DEBUG [hconnection-0x412f866-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:37,461 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33942, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:37,469 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,469 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,470 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:37,470 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,471 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:37,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:37,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:37,482 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_348588243, current retry=0 2023-07-15 13:15:37,482 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:37,482 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_348588243 => default 2023-07-15 13:15:37,482 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,489 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_348588243 2023-07-15 13:15:37,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:37,499 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,500 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:37,501 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,502 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:37,502 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,503 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:37,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:37,510 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,515 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:37,516 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:37,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:37,522 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,526 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,527 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,530 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:37,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428137530, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:37,531 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:37,533 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:37,534 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,534 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,536 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:37,536 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:37,537 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,566 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=506 (was 421) Potentially hanging thread: hconnection-0x412f866-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54157@0x3bf03c53 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for 
client DFSClient_NONMAPREDUCE_1300411426_17 at /127.0.0.1:51880 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe-prefix:jenkins-hbase4.apache.org,44807,1689426930103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:44807Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1896529575_17 at /127.0.0.1:47398 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2003730860_17 at /127.0.0.1:35756 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1300411426_17 at /127.0.0.1:35740 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016, 
type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54157@0x3bf03c53-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:42517 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44807-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-636-acceptor-0@2376f5c7-ServerConnector@2652937{HTTP/1.1, (http/1.1)}{0.0.0.0:35475} 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:35698 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2003730860_17 at /127.0.0.1:47448 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:51842 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1300411426_17 at /127.0.0.1:47428 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe-prefix:jenkins-hbase4.apache.org,38761,1689426926503.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:47374 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-213758c7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42517 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1555813780-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1555813780-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54157@0x3bf03c53-SendThread(127.0.0.1:54157) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x412f866-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_100265041_17 at /127.0.0.1:52014 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44807 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44807 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=820 (was 663) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=364 (was 361) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3533 (was 3363) - AvailableMemoryMB LEAK? 
- 2023-07-15 13:15:37,567 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-15 13:15:37,589 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=506, OpenFileDescriptor=820, MaxFileDescriptor=60000, SystemLoadAverage=364, ProcessCount=172, AvailableMemoryMB=3533 2023-07-15 13:15:37,589 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-15 13:15:37,593 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-15 13:15:37,599 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,599 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,600 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:37,600 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,602 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:37,602 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,603 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:37,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:37,611 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,615 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:37,616 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:37,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,622 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:37,624 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,628 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,628 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,630 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:37,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428137630, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:37,631 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:37,633 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:37,634 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,634 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,635 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:37,635 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:37,636 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,637 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-15 13:15:37,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36536 deadline: 1689428137637, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 13:15:37,638 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-15 13:15:37,639 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36536 deadline: 1689428137638, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 13:15:37,640 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-15 13:15:37,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:36536 deadline: 1689428137640, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 13:15:37,649 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-15 13:15:37,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-15 13:15:37,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:37,657 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,661 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,662 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,676 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,677 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,678 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:37,678 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,680 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:37,680 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,681 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-15 13:15:37,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,691 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:37,691 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,692 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:37,692 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,693 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:37,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:37,702 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,706 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:37,708 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:37,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:37,716 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,722 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,722 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,732 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:37,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428137732, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:37,733 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:37,735 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:37,736 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,736 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,737 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:37,738 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:37,738 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,764 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=509 (was 506) Potentially hanging thread: hconnection-0x22934466-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=820 (was 820), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=367 (was 364) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3528 (was 3533) 2023-07-15 13:15:37,764 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-15 13:15:37,789 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=509, OpenFileDescriptor=820, MaxFileDescriptor=60000, SystemLoadAverage=367, ProcessCount=172, AvailableMemoryMB=3527 2023-07-15 13:15:37,789 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-15 13:15:37,789 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-15 13:15:37,794 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,795 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,796 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:37,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:37,797 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:37,798 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:37,798 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:37,800 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:37,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:37,807 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:37,810 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:37,811 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:37,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:37,818 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,822 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,822 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,825 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:37,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:37,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428137825, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:37,825 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:37,827 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:37,828 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,828 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,828 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:37,829 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:37,829 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,830 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,830 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,831 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:37,831 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:37,832 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-15 13:15:37,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:37,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:37,840 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:37,843 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:37,843 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:37,846 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:37679] to rsgroup bar 2023-07-15 13:15:37,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:37,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:37,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:37,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:37,851 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(238): Moving server region e42fabbf20609a275edbe64c71867bfc, which do not belong to RSGroup bar 2023-07-15 13:15:37,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE 2023-07-15 13:15:37,852 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-15 13:15:37,853 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE 2023-07-15 13:15:37,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 13:15:37,854 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 
updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:37,854 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-15 13:15:37,855 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 13:15:37,855 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426937854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426937854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426937854"}]},"ts":"1689426937854"} 2023-07-15 13:15:37,856 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38761,1689426926503, state=CLOSING 2023-07-15 13:15:37,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:37,858 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:37,858 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:37,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=82, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:37,861 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=83, ppid=81, state=RUNNABLE; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:38,014 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-15 13:15:38,015 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:38,015 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:38,015 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:38,015 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:38,015 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:38,016 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.27 KB heapSize=62.02 KB 2023-07-15 13:15:38,036 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore 
data size=37.38 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,043 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,057 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/rep_barrier/192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,084 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/9d857f807610420e89b917151d70fa29 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/9d857f807610420e89b917151d70fa29, entries=41, sequenceid=105, filesize=9.6 K 2023-07-15 13:15:38,092 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/rep_barrier/192a354615cd436a88a4bad78104e326 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier/192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier/192a354615cd436a88a4bad78104e326, entries=10, sequenceid=105, filesize=6.1 K 2023-07-15 13:15:38,100 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/a7226d05e0fb49df87d035517e92b0f6 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,106 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,107 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/a7226d05e0fb49df87d035517e92b0f6, entries=11, sequenceid=105, filesize=6.0 K 2023-07-15 13:15:38,108 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.27 KB/41233, heapSize ~61.98 KB/63464, currentSize=0 B/0 for 1588230740 in 93ms, sequenceid=105, compaction requested=false 2023-07-15 13:15:38,119 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/recovered.edits/108.seqid, newMaxSeqId=108, maxSeqId=19 2023-07-15 13:15:38,120 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:38,120 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:38,120 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:38,120 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44807,1689426930103 record at close sequenceid=105 2023-07-15 13:15:38,122 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-15 13:15:38,122 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-15 13:15:38,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=82 2023-07-15 13:15:38,124 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=82, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38761,1689426926503 in 264 msec 2023-07-15 13:15:38,125 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:38,275 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44807,1689426930103, state=OPENING 2023-07-15 13:15:38,277 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:38,277 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=82, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:38,277 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:38,434 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 13:15:38,434 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:38,436 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44807%2C1689426930103.meta, suffix=.meta, logDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,44807,1689426930103, archiveDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs, maxLogs=32 2023-07-15 13:15:38,455 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK] 2023-07-15 13:15:38,455 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK] 2023-07-15 13:15:38,460 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK] 2023-07-15 13:15:38,463 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,44807,1689426930103/jenkins-hbase4.apache.org%2C44807%2C1689426930103.meta.1689426938437.meta 2023-07-15 13:15:38,466 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33833,DS-834fac10-3713-40cb-b8d3-78d39d80cf56,DISK], DatanodeInfoWithStorage[127.0.0.1:43995,DS-0f3a48f8-8cb0-454b-850a-8844cb779b84,DISK], DatanodeInfoWithStorage[127.0.0.1:39307,DS-7628a710-bcf4-4ebc-bb13-e8ea0be15a35,DISK]] 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 13:15:38,467 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 13:15:38,467 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 13:15:38,469 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:15:38,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:38,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info 2023-07-15 13:15:38,471 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:38,483 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/0a8b942f47c44e2c834ccca876fd0ae3 2023-07-15 13:15:38,488 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,488 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/9d857f807610420e89b917151d70fa29 2023-07-15 13:15:38,488 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:38,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family 
rep_barrier of region 1588230740 2023-07-15 13:15:38,489 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:38,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:38,490 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:38,496 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,496 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier/192a354615cd436a88a4bad78104e326 2023-07-15 13:15:38,496 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:38,496 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:38,498 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:38,498 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table 2023-07-15 13:15:38,498 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:38,505 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): 
loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/2a54995606bd4505a4407e782cd33ad0 2023-07-15 13:15:38,512 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,512 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/a7226d05e0fb49df87d035517e92b0f6 2023-07-15 13:15:38,512 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:38,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:38,514 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740 2023-07-15 13:15:38,516 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 13:15:38,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:38,520 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=109; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9908518880, jitterRate=-0.07719726860523224}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:38,520 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:38,521 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=85, masterSystemTime=1689426938429 2023-07-15 13:15:38,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 13:15:38,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 13:15:38,527 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44807,1689426930103, state=OPEN 2023-07-15 13:15:38,528 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:38,528 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:38,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=82 2023-07-15 
13:15:38,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=82, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44807,1689426930103 in 251 msec 2023-07-15 13:15:38,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 678 msec 2023-07-15 13:15:38,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:38,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e42fabbf20609a275edbe64c71867bfc, disabling compactions & flushes 2023-07-15 13:15:38,684 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:38,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:38,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. after waiting 0 ms 2023-07-15 13:15:38,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:38,684 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e42fabbf20609a275edbe64c71867bfc 1/1 column families, dataSize=4.98 KB heapSize=8.39 KB 2023-07-15 13:15:38,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.98 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:38,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:38,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/974c54dcb6a84686a5acf26b0d05ca6b as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:38,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:38,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/974c54dcb6a84686a5acf26b0d05ca6b, entries=9, sequenceid=32, filesize=5.5 K 2023-07-15 13:15:38,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.98 KB/5100, heapSize ~8.38 KB/8576, currentSize=0 B/0 for 
e42fabbf20609a275edbe64c71867bfc in 38ms, sequenceid=32, compaction requested=false 2023-07-15 13:15:38,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-15 13:15:38,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:38,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:38,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:38,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e42fabbf20609a275edbe64c71867bfc move to jenkins-hbase4.apache.org,44807,1689426930103 record at close sequenceid=32 2023-07-15 13:15:38,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:38,734 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=CLOSED 2023-07-15 13:15:38,735 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426938734"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426938734"}]},"ts":"1689426938734"} 2023-07-15 13:15:38,735 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38761] ipc.CallRunner(144): callId: 195 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:44972 deadline: 1689426998735, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=105. 
2023-07-15 13:15:38,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-15 13:15:38,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; CloseRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,38761,1689426926503 in 981 msec 2023-07-15 13:15:38,840 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:38,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-15 13:15:38,991 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:38,991 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426938991"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426938991"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426938991"}]},"ts":"1689426938991"} 2023-07-15 13:15:38,993 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=81, state=RUNNABLE; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:39,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:39,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e42fabbf20609a275edbe64c71867bfc, NAME => 'hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:39,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:39,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. service=MultiRowMutationService 2023-07-15 13:15:39,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-15 13:15:39,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:39,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,151 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,152 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:39,152 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m 2023-07-15 13:15:39,153 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e42fabbf20609a275edbe64c71867bfc columnFamilyName m 2023-07-15 13:15:39,162 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:39,162 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/974c54dcb6a84686a5acf26b0d05ca6b 2023-07-15 13:15:39,168 DEBUG [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(539): loaded hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/f63cee1a17324fa69b06e1ff700fd92a 2023-07-15 13:15:39,169 INFO [StoreOpener-e42fabbf20609a275edbe64c71867bfc-1] regionserver.HStore(310): Store=e42fabbf20609a275edbe64c71867bfc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:39,169 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,171 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,174 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:39,175 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e42fabbf20609a275edbe64c71867bfc; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@530b7c4f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:39,176 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:39,176 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc., pid=86, masterSystemTime=1689426939145 2023-07-15 13:15:39,178 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:39,178 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 
2023-07-15 13:15:39,179 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=e42fabbf20609a275edbe64c71867bfc, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:39,179 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426939179"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426939179"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426939179"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426939179"}]},"ts":"1689426939179"} 2023-07-15 13:15:39,183 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=81 2023-07-15 13:15:39,183 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=81, state=SUCCESS; OpenRegionProcedure e42fabbf20609a275edbe64c71867bfc, server=jenkins-hbase4.apache.org,44807,1689426930103 in 187 msec 2023-07-15 13:15:39,184 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e42fabbf20609a275edbe64c71867bfc, REOPEN/MOVE in 1.3320 sec 2023-07-15 13:15:39,812 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 13:15:39,856 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099, jenkins-hbase4.apache.org,38761,1689426926503] are moved back to default 2023-07-15 13:15:39,856 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-15 13:15:39,856 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:39,859 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38761] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:44982 deadline: 1689426999858, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=32. 2023-07-15 13:15:39,960 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38761] ipc.CallRunner(144): callId: 15 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:44982 deadline: 1689426999960, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=105. 
2023-07-15 13:15:40,061 DEBUG [hconnection-0x22934466-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:40,063 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:40,075 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:40,075 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:40,077 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-15 13:15:40,078 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:40,080 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:40,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:40,082 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:40,083 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 87 2023-07-15 13:15:40,083 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38761] ipc.CallRunner(144): callId: 200 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:44972 deadline: 1689427000083, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=32. 
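The create request above spells out the whole table descriptor: a single family 'f' with default attributes and REGION_REPLICATION => '1', giving one region that spans the full key range. A minimal sketch of submitting the same schema through the public Admin API (the class name is illustrative; connection settings are left at defaults):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

  public class CreateGroupTestTableSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
            .setRegionReplication(1)                                 // REGION_REPLICATION => '1'
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))  // family 'f', default attributes
            .build();
        admin.createTable(desc);  // no split keys: one region, STARTKEY => '', ENDKEY => ''
      }
    }
  }

The procedure entries that follow (pid=87) walk this create through CREATE_TABLE_PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META and ASSIGN_REGIONS before the client's table future completes.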
2023-07-15 13:15:40,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 13:15:40,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 13:15:40,188 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:40,189 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:40,189 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:40,190 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:40,200 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:40,202 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,202 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 empty. 2023-07-15 13:15:40,203 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,203 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-15 13:15:40,230 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:40,231 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d06d86bf0462e0116ce1563451ec31f0, NAME => 'Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:40,245 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:40,246 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing d06d86bf0462e0116ce1563451ec31f0, disabling compactions & flushes 2023-07-15 13:15:40,246 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,246 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,246 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. after waiting 0 ms 2023-07-15 13:15:40,246 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,246 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,246 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:40,248 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:40,249 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426940249"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426940249"}]},"ts":"1689426940249"} 2023-07-15 13:15:40,251 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 13:15:40,252 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:40,252 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426940252"}]},"ts":"1689426940252"} 2023-07-15 13:15:40,254 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-15 13:15:40,258 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, ASSIGN}] 2023-07-15 13:15:40,262 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, ASSIGN 2023-07-15 13:15:40,264 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:40,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 13:15:40,416 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:40,416 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426940416"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426940416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426940416"}]},"ts":"1689426940416"} 2023-07-15 13:15:40,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:40,577 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
2023-07-15 13:15:40,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d06d86bf0462e0116ce1563451ec31f0, NAME => 'Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:40,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:40,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,579 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,581 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:40,581 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:40,581 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d06d86bf0462e0116ce1563451ec31f0 columnFamilyName f 2023-07-15 13:15:40,582 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(310): Store=d06d86bf0462e0116ce1563451ec31f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:40,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,583 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:40,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d06d86bf0462e0116ce1563451ec31f0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10962179200, jitterRate=0.020932495594024658}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:40,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:40,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0., pid=89, masterSystemTime=1689426940573 2023-07-15 13:15:40,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
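With the region now open on jenkins-hbase4.apache.org,44807, the next entries show the test waiting up to 60000 ms for every region of the new table to be assigned before it starts moving the table between groups. A rough client-side equivalent using only the public Admin API; the polling helper below is an illustration, not the test's own wait utility:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class WaitForAssignmentSketch {
    public static void main(String[] args) throws Exception {
      TableName table = TableName.valueOf("Group_testFailRemoveGroup");
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        long deadline = System.currentTimeMillis() + 60_000L;  // mirrors the 60000 ms timeout in the log
        // isTableAvailable returns true once every region of the table is open on some server.
        while (!admin.isTableAvailable(table)) {
          if (System.currentTimeMillis() > deadline) {
            throw new IllegalStateException("Regions of " + table + " not assigned in time");
          }
          Thread.sleep(200);
        }
      }
    }
  }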
2023-07-15 13:15:40,594 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:40,594 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426940593"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426940593"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426940593"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426940593"}]},"ts":"1689426940593"} 2023-07-15 13:15:40,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-15 13:15:40,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103 in 175 msec 2023-07-15 13:15:40,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-15 13:15:40,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, ASSIGN in 341 msec 2023-07-15 13:15:40,609 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:40,609 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426940609"}]},"ts":"1689426940609"} 2023-07-15 13:15:40,612 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-15 13:15:40,614 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:40,617 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 535 msec 2023-07-15 13:15:40,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 13:15:40,689 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-15 13:15:40,689 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-15 13:15:40,689 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:40,708 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38761] ipc.CallRunner(144): callId: 278 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:44998 deadline: 1689427000707, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44807 startCode=1689426930103. As of locationSeqNum=105. 2023-07-15 13:15:40,809 DEBUG [hconnection-0x4925918a-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:40,811 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54786, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:40,824 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-15 13:15:40,824 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:40,824 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-15 13:15:40,827 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-15 13:15:40,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:40,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:40,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:40,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:40,832 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-15 13:15:40,832 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region d06d86bf0462e0116ce1563451ec31f0 to RSGroup bar 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-15 13:15:40,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:40,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE 2023-07-15 13:15:40,834 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-15 13:15:40,835 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE 2023-07-15 13:15:40,839 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:40,839 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426940839"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426940839"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426940839"}]},"ts":"1689426940839"} 2023-07-15 13:15:40,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:40,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:40,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d06d86bf0462e0116ce1563451ec31f0, disabling compactions & flushes 2023-07-15 13:15:40,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:40,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. after waiting 0 ms 2023-07-15 13:15:40,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:41,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:41,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
2023-07-15 13:15:41,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:41,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d06d86bf0462e0116ce1563451ec31f0 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:41,006 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSED 2023-07-15 13:15:41,006 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426941006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426941006"}]},"ts":"1689426941006"} 2023-07-15 13:15:41,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-15 13:15:41,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103 in 167 msec 2023-07-15 13:15:41,011 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:41,161 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:15:41,162 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:41,162 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426941162"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426941162"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426941162"}]},"ts":"1689426941162"} 2023-07-15 13:15:41,164 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:41,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
2023-07-15 13:15:41,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d06d86bf0462e0116ce1563451ec31f0, NAME => 'Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:41,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:41,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,322 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,323 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:41,323 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:41,323 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d06d86bf0462e0116ce1563451ec31f0 columnFamilyName f 2023-07-15 13:15:41,324 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(310): Store=d06d86bf0462e0116ce1563451ec31f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:41,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,326 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:41,330 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d06d86bf0462e0116ce1563451ec31f0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11062741920, jitterRate=0.030298128724098206}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:41,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:41,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0., pid=92, masterSystemTime=1689426941316 2023-07-15 13:15:41,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:41,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:41,333 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:41,333 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426941332"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426941332"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426941332"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426941332"}]},"ts":"1689426941332"} 2023-07-15 13:15:41,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-15 13:15:41,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,34837,1689426926314 in 170 msec 2023-07-15 13:15:41,337 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE in 503 msec 2023-07-15 13:15:41,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-15 13:15:41,835 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
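This block is the effect of RSGroupAdminService.MoveTables: the table is re-tagged to group bar, its single region is closed on jenkins-hbase4.apache.org,44807 and reopened on jenkins-hbase4.apache.org,34837 (a bar member), openSeqNum advances from 2 to 5, and the RPC only returns after ProcedureSyncWait sees the REOPEN/MOVE procedure finish. A sketch of driving and observing the same move from a client, again assuming the RSGroupAdminClient API; the class name and the probe row are illustrative:

  import java.util.Collections;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HRegionLocation;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.RegionLocator;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MoveTableToGroupSketch {
    public static void main(String[] args) throws Exception {
      TableName table = TableName.valueOf("Group_testFailRemoveGroup");
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.moveTables(Collections.singleton(table), "bar");  // returns once the region move is done
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          // reload=true skips the cached location so the post-move server is visible.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println("Region now on " + loc.getServerName());
        }
      }
    }
  }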
2023-07-15 13:15:41,835 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:41,839 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:41,840 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:41,842 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-15 13:15:41,843 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:41,843 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 13:15:41,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:41,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36536 deadline: 1689428141843, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-15 13:15:41,845 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:41,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:41,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 290 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:36536 deadline: 1689428141845, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-15 13:15:41,848 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-15 13:15:41,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:41,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:41,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:41,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:41,853 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-15 13:15:41,853 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region d06d86bf0462e0116ce1563451ec31f0 to RSGroup default 2023-07-15 13:15:41,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE 2023-07-15 13:15:41,854 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 13:15:41,855 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE 2023-07-15 13:15:41,856 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:41,856 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426941856"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426941856"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426941856"}]},"ts":"1689426941856"} 2023-07-15 13:15:41,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:42,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d06d86bf0462e0116ce1563451ec31f0, disabling compactions & flushes 2023-07-15 13:15:42,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:42,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:42,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. after waiting 0 ms 2023-07-15 13:15:42,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:42,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:42,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
2023-07-15 13:15:42,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:42,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d06d86bf0462e0116ce1563451ec31f0 move to jenkins-hbase4.apache.org,44807,1689426930103 record at close sequenceid=5 2023-07-15 13:15:42,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,019 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSED 2023-07-15 13:15:42,019 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426942019"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426942019"}]},"ts":"1689426942019"} 2023-07-15 13:15:42,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-15 13:15:42,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,34837,1689426926314 in 162 msec 2023-07-15 13:15:42,022 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:42,173 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:42,173 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426942173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426942173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426942173"}]},"ts":"1689426942173"} 2023-07-15 13:15:42,175 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=93, state=RUNNABLE; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:42,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
2023-07-15 13:15:42,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d06d86bf0462e0116ce1563451ec31f0, NAME => 'Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:42,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:42,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,334 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,335 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:42,335 DEBUG [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f 2023-07-15 13:15:42,336 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d06d86bf0462e0116ce1563451ec31f0 columnFamilyName f 2023-07-15 13:15:42,337 INFO [StoreOpener-d06d86bf0462e0116ce1563451ec31f0-1] regionserver.HStore(310): Store=d06d86bf0462e0116ce1563451ec31f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:42,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,339 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,339 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 13:15:42,340 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-15 13:15:42,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:42,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d06d86bf0462e0116ce1563451ec31f0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10354233920, jitterRate=-0.03568682074546814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:42,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:42,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0., pid=95, masterSystemTime=1689426942327 2023-07-15 13:15:42,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:42,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
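At this point the region is back on jenkins-hbase4.apache.org,44807 (next sequenceid=8), so the table again lives in the default group. As the remaining entries show, removing bar is still refused while the group holds its three servers; only after MoveServers returns them to default does RemoveRSGroup succeed and the ZK GroupInfo count drop from 6 to 5. The full teardown order, as a hedged sketch on the same assumed RSGroupAdminClient API (class name and host:port pairs are placeholders):

  import java.util.Arrays;
  import java.util.Collections;
  import java.util.HashSet;
  import java.util.Set;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

  public class RemoveGroupTeardownSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        // 1. Move every table the group owns back to default.
        groups.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
        // 2. Move the group's servers back to default (placeholder addresses).
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("rs-host-1", 16020),
            Address.fromParts("rs-host-2", 16020),
            Address.fromParts("rs-host-3", 16020)));
        groups.moveServers(servers, "default");
        // 3. The group is now empty, so removal is accepted.
        groups.removeRSGroup("bar");
      }
    }
  }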
2023-07-15 13:15:42,350 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:42,350 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426942349"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426942349"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426942349"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426942349"}]},"ts":"1689426942349"} 2023-07-15 13:15:42,356 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=93 2023-07-15 13:15:42,356 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=93, state=SUCCESS; OpenRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103 in 177 msec 2023-07-15 13:15:42,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, REOPEN/MOVE in 503 msec 2023-07-15 13:15:42,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=93 2023-07-15 13:15:42,854 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-15 13:15:42,854 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:42,859 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:42,859 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:42,863 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 13:15:42,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:42,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 297 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36536 deadline: 1689428142863, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 2023-07-15 13:15:42,865 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:42,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:42,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 13:15:42,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:42,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:42,871 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-15 13:15:42,871 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099, jenkins-hbase4.apache.org,38761,1689426926503] are moved back to bar 2023-07-15 13:15:42,871 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-15 13:15:42,871 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:42,875 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:42,876 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:42,878 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 13:15:42,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:42,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:42,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:42,884 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:42,887 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:42,887 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:42,889 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:42,889 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:42,891 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-15 13:15:42,891 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-15 13:15:42,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:42,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-15 13:15:42,895 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426942895"}]},"ts":"1689426942895"} 2023-07-15 13:15:42,896 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-15 13:15:42,898 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-15 13:15:42,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=96, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, UNASSIGN}] 2023-07-15 13:15:42,901 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, ppid=96, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, UNASSIGN 2023-07-15 
13:15:42,901 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:42,902 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426942901"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426942901"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426942901"}]},"ts":"1689426942901"} 2023-07-15 13:15:42,903 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:42,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-15 13:15:43,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:43,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d06d86bf0462e0116ce1563451ec31f0, disabling compactions & flushes 2023-07-15 13:15:43,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:43,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:43,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. after waiting 0 ms 2023-07-15 13:15:43,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 2023-07-15 13:15:43,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 13:15:43,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0. 
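The failed "remove rsgroup bar" (ConstraintException while the group still holds 3 servers), the MoveServers back to default, and the subsequent successful RemoveRSGroup logged above correspond roughly to the client sequence sketched here (same branch-2.4 RSGroupAdminClient/Address assumptions; the server addresses are copied from the log):

  import java.util.HashSet;
  import java.util.Set;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.constraint.ConstraintException;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

  public class RemoveGroupSketch {
    static void emptyAndRemoveGroup(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      try {
        rsGroupAdmin.removeRSGroup("bar"); // rejected while servers remain
      } catch (ConstraintException expected) {
        // "RSGroup bar has 3 servers; you must remove these servers ..."
      }
      // Empty the group first, then retry the removal.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromString("jenkins-hbase4.apache.org:34837"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:38761"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:37679"));
      rsGroupAdmin.moveServers(servers, "default");
      rsGroupAdmin.removeRSGroup("bar");
    }
  }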
2023-07-15 13:15:43,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d06d86bf0462e0116ce1563451ec31f0: 2023-07-15 13:15:43,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:43,063 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=d06d86bf0462e0116ce1563451ec31f0, regionState=CLOSED 2023-07-15 13:15:43,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-15 13:15:43,249 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689426943063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426943063"}]},"ts":"1689426943063"} 2023-07-15 13:15:43,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-15 13:15:43,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; CloseRegionProcedure d06d86bf0462e0116ce1563451ec31f0, server=jenkins-hbase4.apache.org,44807,1689426930103 in 348 msec 2023-07-15 13:15:43,255 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=96 2023-07-15 13:15:43,255 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=96, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=d06d86bf0462e0116ce1563451ec31f0, UNASSIGN in 354 msec 2023-07-15 13:15:43,256 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426943256"}]},"ts":"1689426943256"} 2023-07-15 13:15:43,268 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-15 13:15:43,271 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-15 13:15:43,277 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 381 msec 2023-07-15 13:15:43,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-15 13:15:43,549 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-15 13:15:43,553 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-15 13:15:43,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=99, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,563 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=99, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-15 13:15:43,566 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=99, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:43,574 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:43,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-15 13:15:43,578 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits] 2023-07-15 13:15:43,585 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/10.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0/recovered.edits/10.seqid 2023-07-15 13:15:43,585 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testFailRemoveGroup/d06d86bf0462e0116ce1563451ec31f0 2023-07-15 13:15:43,586 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-15 13:15:43,591 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=99, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,600 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-15 13:15:43,605 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-15 13:15:43,607 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=99, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,607 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
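The DisableTableProcedure (pid=96, with its UNASSIGN/CloseRegionProcedure children) and the DeleteTableProcedure (pid=99, archiving the region directory above and cleaning hbase:meta just below) are driven by two ordinary Admin calls; a minimal sketch using only the standard HBase 2.x client API:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public class DropTableSketch {
    static void dropTable(Connection conn) throws Exception {
      TableName table = TableName.valueOf("Group_testFailRemoveGroup");
      try (Admin admin = conn.getAdmin()) {
        // DISABLE: unassigns and closes every region (pid=96..98 above).
        admin.disableTable(table);
        // DELETE: archives the region dirs and removes the table from hbase:meta (pid=99).
        admin.deleteTable(table);
      }
    }
  }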
2023-07-15 13:15:43,607 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426943607"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:43,612 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 13:15:43,612 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d06d86bf0462e0116ce1563451ec31f0, NAME => 'Group_testFailRemoveGroup,,1689426940079.d06d86bf0462e0116ce1563451ec31f0.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 13:15:43,612 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-15 13:15:43,613 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426943613"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:43,615 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-15 13:15:43,617 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=99, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 13:15:43,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 63 msec 2023-07-15 13:15:43,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-15 13:15:43,678 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 99 completed 2023-07-15 13:15:43,683 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,683 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,684 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:43,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
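The remaining entries are the TestRSGroupsBase teardown resetting group state: empty MoveTables/MoveServers calls, RemoveRSGroup/AddRSGroup for 'master', and ListRSGroupInfos/GetRSGroupInfo polling until cleanup finishes. As recorded in the RPC log, that sequence corresponds roughly to the sketch below (the group name and the empty sets come straight from the log; the API assumptions are as before):

  import java.util.Collections;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class TeardownSketch {
    static void resetGroups(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Empty sets are accepted; the server logs "moveTables() passed an empty set. Ignoring."
      rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), "default");
      rsGroupAdmin.moveServers(Collections.<Address>emptySet(), "default");
      // Drop and recreate the 'master' group used by the fixture.
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");
      // Poll group state, mirroring "Waiting for cleanup to finish" further down.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + info.getServers());
    }
  }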
2023-07-15 13:15:43,685 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:43,686 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:43,686 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:43,687 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:43,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:43,694 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:43,699 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:43,700 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:43,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:43,706 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:43,712 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,712 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,714 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:43,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:43,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 347 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428143714, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:43,715 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:43,716 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:43,717 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,717 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,717 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:43,718 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:43,718 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:43,738 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=523 (was 509) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_100265041_17 at /127.0.0.1:52076 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4925918a-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:47600 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:54524 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2003730860_17 at /127.0.0.1:35908 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:52052 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:35884 [Receiving block BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-473868280-172.31.14.131-1689426920401:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe-prefix:jenkins-hbase4.apache.org,44807,1689426930103.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=817 (was 820), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=361 (was 367), ProcessCount=172 (was 172), AvailableMemoryMB=3322 (was 3527) 2023-07-15 13:15:43,738 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:43,756 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=523, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=361, ProcessCount=172, AvailableMemoryMB=3321 2023-07-15 13:15:43,756 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:43,756 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-15 13:15:43,760 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,760 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,761 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:43,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:43,761 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:43,762 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:43,762 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:43,763 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:43,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:43,768 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:43,770 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:43,771 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:43,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,773 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:43,777 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:43,780 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,780 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,782 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:43,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:43,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 375 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428143782, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:43,783 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:43,787 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:43,788 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,788 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,788 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:43,789 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:43,789 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:43,790 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:43,790 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:43,791 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_394241202 2023-07-15 13:15:43,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:43,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:43,797 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:43,800 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,800 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,803 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837] to rsgroup Group_testMultiTableMove_394241202 2023-07-15 13:15:43,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:43,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:43,808 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:43,808 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314] are moved back to default 2023-07-15 13:15:43,808 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_394241202 2023-07-15 13:15:43,808 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:43,812 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:43,812 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_394241202 2023-07-15 13:15:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:43,817 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:43,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:43,825 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:43,825 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 100 2023-07-15 13:15:43,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-15 13:15:43,828 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:43,828 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:43,829 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:43,829 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:43,835 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:43,836 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:43,837 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 empty. 2023-07-15 13:15:43,837 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:43,838 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-15 13:15:43,866 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:43,870 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2ba520ddfc61118f39762080ca1ba5e0, NAME => 'GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:43,887 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:43,887 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
2ba520ddfc61118f39762080ca1ba5e0, disabling compactions & flushes 2023-07-15 13:15:43,888 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:43,888 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:43,888 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. after waiting 0 ms 2023-07-15 13:15:43,888 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:43,888 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:43,888 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 2ba520ddfc61118f39762080ca1ba5e0: 2023-07-15 13:15:43,890 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:43,891 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426943891"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426943891"}]},"ts":"1689426943891"} 2023-07-15 13:15:43,893 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 13:15:43,894 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:43,894 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426943894"}]},"ts":"1689426943894"} 2023-07-15 13:15:43,895 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-15 13:15:43,899 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:43,899 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:43,899 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:43,899 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:43,899 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:43,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, ASSIGN}] 2023-07-15 13:15:43,901 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, ASSIGN 2023-07-15 13:15:43,902 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:43,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-15 13:15:44,053 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 13:15:44,054 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:44,055 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426944054"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426944054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426944054"}]},"ts":"1689426944054"} 2023-07-15 13:15:44,056 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:44,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-15 13:15:44,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2ba520ddfc61118f39762080ca1ba5e0, NAME => 'GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,215 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,217 DEBUG [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/f 2023-07-15 13:15:44,217 DEBUG [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/f 2023-07-15 13:15:44,217 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2ba520ddfc61118f39762080ca1ba5e0 columnFamilyName f 2023-07-15 13:15:44,218 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] regionserver.HStore(310): Store=2ba520ddfc61118f39762080ca1ba5e0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:44,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:44,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:44,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2ba520ddfc61118f39762080ca1ba5e0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10449000480, jitterRate=-0.026860997080802917}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:44,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2ba520ddfc61118f39762080ca1ba5e0: 2023-07-15 13:15:44,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0., pid=102, masterSystemTime=1689426944208 2023-07-15 13:15:44,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:44,241 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 
2023-07-15 13:15:44,241 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:44,241 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426944241"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426944241"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426944241"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426944241"}]},"ts":"1689426944241"} 2023-07-15 13:15:44,246 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-15 13:15:44,246 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,38761,1689426926503 in 187 msec 2023-07-15 13:15:44,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-15 13:15:44,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, ASSIGN in 347 msec 2023-07-15 13:15:44,249 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:44,250 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426944249"}]},"ts":"1689426944249"} 2023-07-15 13:15:44,251 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-15 13:15:44,254 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:44,255 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 437 msec 2023-07-15 13:15:44,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-15 13:15:44,430 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-15 13:15:44,430 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-15 13:15:44,430 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:44,434 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-15 13:15:44,434 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:44,434 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-15 13:15:44,436 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:44,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:44,439 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:44,439 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 103 2023-07-15 13:15:44,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 13:15:44,442 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:44,443 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:44,443 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:44,444 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:44,446 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:44,448 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,449 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 empty. 
2023-07-15 13:15:44,449 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,449 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-15 13:15:44,464 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:44,465 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => ca56212d4bdef0cbf31498a247968a73, NAME => 'GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing ca56212d4bdef0cbf31498a247968a73, disabling compactions & flushes 2023-07-15 13:15:44,480 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. after waiting 0 ms 2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:44,480 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
2023-07-15 13:15:44,480 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for ca56212d4bdef0cbf31498a247968a73: 2023-07-15 13:15:44,483 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:44,484 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426944483"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426944483"}]},"ts":"1689426944483"} 2023-07-15 13:15:44,485 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:44,486 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:44,486 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426944486"}]},"ts":"1689426944486"} 2023-07-15 13:15:44,487 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-15 13:15:44,490 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:44,490 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:44,490 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:44,490 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:44,490 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:44,490 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, ASSIGN}] 2023-07-15 13:15:44,492 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, ASSIGN 2023-07-15 13:15:44,493 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:44,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 13:15:44,649 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 13:15:44,651 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:44,651 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426944651"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426944651"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426944651"}]},"ts":"1689426944651"} 2023-07-15 13:15:44,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; OpenRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:44,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 13:15:44,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:44,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca56212d4bdef0cbf31498a247968a73, NAME => 'GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:44,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:44,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,812 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,813 DEBUG [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/f 2023-07-15 13:15:44,813 DEBUG [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/f 2023-07-15 13:15:44,814 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca56212d4bdef0cbf31498a247968a73 columnFamilyName f 2023-07-15 13:15:44,814 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] regionserver.HStore(310): Store=ca56212d4bdef0cbf31498a247968a73/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:44,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:44,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:44,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ca56212d4bdef0cbf31498a247968a73; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10918708000, jitterRate=0.01688392460346222}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:44,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ca56212d4bdef0cbf31498a247968a73: 2023-07-15 13:15:44,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73., pid=105, masterSystemTime=1689426944806 2023-07-15 13:15:44,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:44,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
2023-07-15 13:15:44,824 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:44,825 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426944824"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426944824"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426944824"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426944824"}]},"ts":"1689426944824"} 2023-07-15 13:15:44,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-15 13:15:44,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; OpenRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,44807,1689426930103 in 171 msec 2023-07-15 13:15:44,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-15 13:15:44,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, ASSIGN in 338 msec 2023-07-15 13:15:44,831 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:44,831 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426944831"}]},"ts":"1689426944831"} 2023-07-15 13:15:44,832 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-15 13:15:44,835 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:44,836 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 399 msec 2023-07-15 13:15:45,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 13:15:45,044 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 103 completed 2023-07-15 13:15:45,045 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-15 13:15:45,045 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:45,048 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
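[editor's note] The CREATE flow above (CreateTableProcedure pid=103, then the "Waiting until all regions ... get assigned" check) is driven from the test side roughly as in the following sketch. TEST_UTIL stands for the test's HBaseTestingUtility instance and is an assumption here, as is reusing the single column family 'f' seen in the store-open lines:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    // Create the table with one column family, then block until its region is assigned,
    // which is what produces the assignment-wait log lines above.
    TEST_UTIL.createTable(tableB, Bytes.toBytes("f"));
    TEST_UTIL.waitUntilAllRegionsAssigned(tableB);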
2023-07-15 13:15:45,049 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:45,049 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-15 13:15:45,049 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:45,061 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-15 13:15:45,061 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:45,062 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-15 13:15:45,062 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:45,063 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_394241202 2023-07-15 13:15:45,066 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_394241202 2023-07-15 13:15:45,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:45,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:45,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:45,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:45,071 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_394241202 2023-07-15 13:15:45,072 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region ca56212d4bdef0cbf31498a247968a73 to RSGroup Group_testMultiTableMove_394241202 2023-07-15 13:15:45,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, REOPEN/MOVE 2023-07-15 13:15:45,073 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_394241202 2023-07-15 13:15:45,076 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region 2ba520ddfc61118f39762080ca1ba5e0 to RSGroup Group_testMultiTableMove_394241202 2023-07-15 13:15:45,076 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, REOPEN/MOVE 2023-07-15 13:15:45,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, REOPEN/MOVE 2023-07-15 13:15:45,077 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:45,077 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, REOPEN/MOVE 2023-07-15 13:15:45,077 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_394241202, current retry=0 2023-07-15 13:15:45,077 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945077"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426945077"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426945077"}]},"ts":"1689426945077"} 2023-07-15 13:15:45,078 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:45,078 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945078"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426945078"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426945078"}]},"ts":"1689426945078"} 2023-07-15 13:15:45,079 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=106, state=RUNNABLE; CloseRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:45,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=107, state=RUNNABLE; CloseRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:45,233 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ca56212d4bdef0cbf31498a247968a73, disabling compactions & flushes 2023-07-15 13:15:45,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:45,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:45,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. after waiting 0 ms 2023-07-15 13:15:45,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:45,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2ba520ddfc61118f39762080ca1ba5e0, disabling compactions & flushes 2023-07-15 13:15:45,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. after waiting 0 ms 2023-07-15 13:15:45,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:45,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
2023-07-15 13:15:45,246 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ca56212d4bdef0cbf31498a247968a73: 2023-07-15 13:15:45,246 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ca56212d4bdef0cbf31498a247968a73 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:45,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:45,249 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=CLOSED 2023-07-15 13:15:45,249 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945249"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426945249"}]},"ts":"1689426945249"} 2023-07-15 13:15:45,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2ba520ddfc61118f39762080ca1ba5e0: 2023-07-15 13:15:45,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2ba520ddfc61118f39762080ca1ba5e0 move to jenkins-hbase4.apache.org,34837,1689426926314 record at close sequenceid=2 2023-07-15 13:15:45,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,252 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=CLOSED 2023-07-15 13:15:45,252 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945252"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426945252"}]},"ts":"1689426945252"} 2023-07-15 13:15:45,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=106 2023-07-15 13:15:45,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=106, state=SUCCESS; CloseRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,44807,1689426930103 in 171 msec 2023-07-15 13:15:45,253 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:45,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure 
pid=109, resume processing ppid=107 2023-07-15 13:15:45,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=107, state=SUCCESS; CloseRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,38761,1689426926503 in 173 msec 2023-07-15 13:15:45,255 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34837,1689426926314; forceNewPlan=false, retain=false 2023-07-15 13:15:45,404 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:45,404 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:45,404 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945404"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426945404"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426945404"}]},"ts":"1689426945404"} 2023-07-15 13:15:45,404 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945404"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426945404"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426945404"}]},"ts":"1689426945404"} 2023-07-15 13:15:45,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=106, state=RUNNABLE; OpenRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:45,407 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=107, state=RUNNABLE; OpenRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:45,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 
2023-07-15 13:15:45,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2ba520ddfc61118f39762080ca1ba5e0, NAME => 'GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:45,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:45,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,564 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,565 DEBUG [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/f 2023-07-15 13:15:45,565 DEBUG [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/f 2023-07-15 13:15:45,566 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2ba520ddfc61118f39762080ca1ba5e0 columnFamilyName f 2023-07-15 13:15:45,566 INFO [StoreOpener-2ba520ddfc61118f39762080ca1ba5e0-1] regionserver.HStore(310): Store=2ba520ddfc61118f39762080ca1ba5e0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:45,567 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:45,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2ba520ddfc61118f39762080ca1ba5e0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11832876000, jitterRate=0.1020224541425705}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:45,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2ba520ddfc61118f39762080ca1ba5e0: 2023-07-15 13:15:45,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0., pid=111, masterSystemTime=1689426945558 2023-07-15 13:15:45,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:45,574 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
2023-07-15 13:15:45,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca56212d4bdef0cbf31498a247968a73, NAME => 'GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:45,574 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:45,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,574 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945574"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426945574"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426945574"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426945574"}]},"ts":"1689426945574"} 2023-07-15 13:15:45,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:45,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,576 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,577 DEBUG [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/f 2023-07-15 13:15:45,577 DEBUG [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/f 2023-07-15 13:15:45,577 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=107 2023-07-15 13:15:45,577 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca56212d4bdef0cbf31498a247968a73 columnFamilyName f 2023-07-15 13:15:45,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=107, state=SUCCESS; OpenRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,34837,1689426926314 in 169 msec 2023-07-15 13:15:45,578 INFO [StoreOpener-ca56212d4bdef0cbf31498a247968a73-1] regionserver.HStore(310): Store=ca56212d4bdef0cbf31498a247968a73/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:45,579 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, REOPEN/MOVE in 502 msec 2023-07-15 13:15:45,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:45,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ca56212d4bdef0cbf31498a247968a73; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9577885120, jitterRate=-0.10798993706703186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:45,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ca56212d4bdef0cbf31498a247968a73: 2023-07-15 13:15:45,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73., pid=110, masterSystemTime=1689426945558 2023-07-15 13:15:45,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:45,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
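[editor's note] The MoveTables flow above (pids 106-111: CLOSE on the old RegionServers, then OPEN on jenkins-hbase4.apache.org,34837) is what the master runs when a client moves both tables into the target group. A minimal client-side sketch using the hbase-rsgroup admin client; the Connection setup is assumed, the group name is copied from the log, and the GetRSGroupInfoOfTable requests logged before and after the move correspond to the verification call at the end:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>(Arrays.asList(
          TableName.valueOf("GrouptestMultiTableMoveA"),
          TableName.valueOf("GrouptestMultiTableMoveB")));
      // Moves both tables; the master closes each region and reopens it on a server in the group.
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_394241202");
      // Verify membership, as the GetRSGroupInfoOfTable RPCs in the log do.
      RSGroupInfo groupInfo =
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
      assertEquals("Group_testMultiTableMove_394241202", groupInfo.getName());
      assertTrue(groupInfo.getTables().contains(TableName.valueOf("GrouptestMultiTableMoveB")));
    }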
2023-07-15 13:15:45,589 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:45,589 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426945589"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426945589"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426945589"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426945589"}]},"ts":"1689426945589"} 2023-07-15 13:15:45,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=106 2023-07-15 13:15:45,592 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=106, state=SUCCESS; OpenRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,34837,1689426926314 in 184 msec 2023-07-15 13:15:45,593 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, REOPEN/MOVE in 520 msec 2023-07-15 13:15:46,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=106 2023-07-15 13:15:46,077 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_394241202. 2023-07-15 13:15:46,078 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:46,080 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 13:15:46,081 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:46,081 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:46,084 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,085 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:46,086 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,086 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:46,087 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:46,087 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:46,088 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_394241202 2023-07-15 13:15:46,088 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:46,090 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-15 13:15:46,091 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-15 13:15:46,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-15 13:15:46,100 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426946100"}]},"ts":"1689426946100"} 2023-07-15 13:15:46,102 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-15 13:15:46,103 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-15 13:15:46,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, UNASSIGN}] 2023-07-15 13:15:46,105 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=113, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, UNASSIGN 2023-07-15 13:15:46,106 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=113 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:46,106 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426946106"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426946106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426946106"}]},"ts":"1689426946106"} 2023-07-15 13:15:46,112 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, 
ppid=113, state=RUNNABLE; CloseRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:46,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-15 13:15:46,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:46,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2ba520ddfc61118f39762080ca1ba5e0, disabling compactions & flushes 2023-07-15 13:15:46,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:46,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:46,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. after waiting 0 ms 2023-07-15 13:15:46,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 2023-07-15 13:15:46,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:46,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0. 
2023-07-15 13:15:46,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2ba520ddfc61118f39762080ca1ba5e0: 2023-07-15 13:15:46,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:46,278 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=113 updating hbase:meta row=2ba520ddfc61118f39762080ca1ba5e0, regionState=CLOSED 2023-07-15 13:15:46,278 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426946277"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426946277"}]},"ts":"1689426946277"} 2023-07-15 13:15:46,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-15 13:15:46,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; CloseRegionProcedure 2ba520ddfc61118f39762080ca1ba5e0, server=jenkins-hbase4.apache.org,34837,1689426926314 in 167 msec 2023-07-15 13:15:46,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-15 13:15:46,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2ba520ddfc61118f39762080ca1ba5e0, UNASSIGN in 177 msec 2023-07-15 13:15:46,283 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426946283"}]},"ts":"1689426946283"} 2023-07-15 13:15:46,285 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-15 13:15:46,286 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-15 13:15:46,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 202 msec 2023-07-15 13:15:46,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-15 13:15:46,401 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-15 13:15:46,402 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-15 13:15:46,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=115, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,406 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=115, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_394241202' 2023-07-15 13:15:46,409 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=115, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:46,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:46,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:46,414 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:46,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=115 2023-07-15 13:15:46,417 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits] 2023-07-15 13:15:46,424 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0/recovered.edits/7.seqid 2023-07-15 13:15:46,425 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveA/2ba520ddfc61118f39762080ca1ba5e0 2023-07-15 13:15:46,425 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-15 13:15:46,428 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=115, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,430 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-15 13:15:46,432 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-15 13:15:46,433 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=115, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,433 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-15 13:15:46,434 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426946433"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:46,439 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 13:15:46,439 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2ba520ddfc61118f39762080ca1ba5e0, NAME => 'GrouptestMultiTableMoveA,,1689426943817.2ba520ddfc61118f39762080ca1ba5e0.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 13:15:46,439 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-15 13:15:46,439 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426946439"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:46,441 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-15 13:15:46,444 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=115, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 13:15:46,446 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 42 msec 2023-07-15 13:15:46,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=115 2023-07-15 13:15:46,518 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 115 completed 2023-07-15 13:15:46,519 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-15 13:15:46,519 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-15 13:15:46,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,525 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426946524"}]},"ts":"1689426946524"} 2023-07-15 13:15:46,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-15 13:15:46,526 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-15 13:15:46,531 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-15 13:15:46,532 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, UNASSIGN}] 2023-07-15 13:15:46,533 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, UNASSIGN 2023-07-15 13:15:46,537 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:46,537 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426946537"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426946537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426946537"}]},"ts":"1689426946537"} 2023-07-15 13:15:46,539 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,34837,1689426926314}] 2023-07-15 13:15:46,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-15 13:15:46,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:46,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ca56212d4bdef0cbf31498a247968a73, disabling compactions & flushes 2023-07-15 13:15:46,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:46,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:46,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. after waiting 0 ms 2023-07-15 13:15:46,697 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 2023-07-15 13:15:46,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:46,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73. 
2023-07-15 13:15:46,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ca56212d4bdef0cbf31498a247968a73: 2023-07-15 13:15:46,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:46,707 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=ca56212d4bdef0cbf31498a247968a73, regionState=CLOSED 2023-07-15 13:15:46,707 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689426946707"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426946707"}]},"ts":"1689426946707"} 2023-07-15 13:15:46,711 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-15 13:15:46,711 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure ca56212d4bdef0cbf31498a247968a73, server=jenkins-hbase4.apache.org,34837,1689426926314 in 170 msec 2023-07-15 13:15:46,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=116 2023-07-15 13:15:46,714 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=116, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ca56212d4bdef0cbf31498a247968a73, UNASSIGN in 179 msec 2023-07-15 13:15:46,714 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426946714"}]},"ts":"1689426946714"} 2023-07-15 13:15:46,717 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-15 13:15:46,719 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-15 13:15:46,721 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 200 msec 2023-07-15 13:15:46,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-15 13:15:46,829 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-15 13:15:46,830 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-15 13:15:46,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,834 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_394241202' 2023-07-15 13:15:46,835 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=119, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:46,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:46,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:46,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-15 13:15:46,843 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:46,857 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits] 2023-07-15 13:15:46,871 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits/7.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73/recovered.edits/7.seqid 2023-07-15 13:15:46,879 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/GrouptestMultiTableMoveB/ca56212d4bdef0cbf31498a247968a73 2023-07-15 13:15:46,879 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-15 13:15:46,883 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=119, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,891 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-15 13:15:46,895 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-15 13:15:46,896 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=119, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,896 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-15 13:15:46,897 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426946896"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:46,899 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 13:15:46,899 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ca56212d4bdef0cbf31498a247968a73, NAME => 'GrouptestMultiTableMoveB,,1689426944436.ca56212d4bdef0cbf31498a247968a73.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 13:15:46,899 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-15 13:15:46,899 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426946899"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:46,901 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-15 13:15:46,907 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=119, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 13:15:46,908 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 77 msec 2023-07-15 13:15:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-15 13:15:46,944 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 119 completed 2023-07-15 13:15:46,947 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:46,947 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:46,948 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:46,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:46,949 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:46,950 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837] to rsgroup default 2023-07-15 13:15:46,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_394241202 2023-07-15 13:15:46,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:46,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:46,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_394241202, current retry=0 2023-07-15 13:15:46,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314] are moved back to Group_testMultiTableMove_394241202 2023-07-15 13:15:46,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_394241202 => default 2023-07-15 13:15:46,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:46,957 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_394241202 2023-07-15 13:15:46,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:46,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:46,963 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:46,963 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:46,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:46,964 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:46,965 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:46,965 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:46,966 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:46,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:46,972 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:46,975 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:46,976 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:46,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:46,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:46,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:46,985 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:46,993 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:46,993 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:46,998 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:46,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:46,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 513 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428146997, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:47,001 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:47,003 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,004 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,004 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,005 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:47,006 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,006 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,031 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=523 (was 523), OpenFileDescriptor=817 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=361 (was 361), ProcessCount=172 (was 172), AvailableMemoryMB=3201 (was 3321) 2023-07-15 13:15:47,031 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:47,048 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=523, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=361, ProcessCount=172, AvailableMemoryMB=3201 2023-07-15 13:15:47,048 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:47,048 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-15 13:15:47,052 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,052 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,053 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:47,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:47,053 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:47,054 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:47,054 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,055 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:47,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:47,060 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:47,063 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:47,063 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:47,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:47,072 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,075 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,075 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,079 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:47,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 541 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428147079, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:47,079 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:47,081 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,082 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,082 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,082 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:47,083 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,083 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,085 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,085 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,085 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-15 13:15:47,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,091 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,095 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,095 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,098 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup oldGroup 2023-07-15 13:15:47,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,103 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:47,103 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to default 2023-07-15 13:15:47,103 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-15 13:15:47,103 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,107 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,107 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,110 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-15 13:15:47,110 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,111 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-15 13:15:47,112 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,112 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,113 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,114 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-15 13:15:47,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 13:15:47,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:47,123 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,126 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,126 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,132 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38761] to rsgroup anotherRSGroup 2023-07-15 13:15:47,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 13:15:47,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:47,138 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:47,138 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38761,1689426926503] are moved back to default 2023-07-15 13:15:47,138 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-15 13:15:47,138 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,141 
INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,141 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,148 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-15 13:15:47,148 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,149 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-15 13:15:47,149 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,155 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-15 13:15:47,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:36536 deadline: 1689428147154, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-15 13:15:47,156 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-15 13:15:47,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:36536 deadline: 1689428147156, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-15 13:15:47,157 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-15 13:15:47,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:36536 deadline: 1689428147157, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-15 13:15:47,158 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-15 13:15:47,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 581 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:36536 deadline: 1689428147158, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-15 13:15:47,164 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,165 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,166 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:47,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
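[editor's note] The three ConstraintException traces above all come from the rename validation in RSGroupInfoManagerImpl.renameRSGroup: the source group must exist, the target name must be free (which also rules out "default" as a target), and the default group itself can never be renamed. The sketch below shows how a client could trigger each rejection; it assumes the branch-2.4 RSGroupAdminClient exposes renameRSGroup(String, String) to match the RSGroupAdminEndpoint.renameRSGroup call in the traces, so treat the client-side signature as an assumption rather than a documented API.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupConstraintsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient drives the RSGroupAdminService endpoint seen in the log.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // 1. Source group does not exist -> "RSGroup nonExistingRSGroup does not exist".
      expectConstraintViolation(() -> rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1"));
      // 2. Target name already taken -> "Group already exists: anotherRSGroup".
      expectConstraintViolation(() -> rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup"));
      // 3. The default group can never be renamed -> "Can't rename default rsgroup".
      expectConstraintViolation(() -> rsGroupAdmin.renameRSGroup("default", "newRSGroup2"));
      // 4. Renaming onto "default" is just case 2 again -> "Group already exists: default".
      expectConstraintViolation(() -> rsGroupAdmin.renameRSGroup("oldGroup", "default"));
    }
  }

  interface RenameCall {
    void run() throws IOException;
  }

  private static void expectConstraintViolation(RenameCall call) throws IOException {
    try {
      call.run();
      throw new IllegalStateException("rename unexpectedly succeeded");
    } catch (ConstraintException expected) {
      // The master rejected the rename, matching the DEBUG traces in the log.
      System.out.println("rejected as expected: " + expected.getMessage());
    }
  }
}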
2023-07-15 13:15:47,166 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:47,167 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38761] to rsgroup default 2023-07-15 13:15:47,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 13:15:47,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:47,177 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-15 13:15:47,177 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38761,1689426926503] are moved back to anotherRSGroup 2023-07-15 13:15:47,177 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-15 13:15:47,177 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,178 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-15 13:15:47,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 13:15:47,185 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:47,186 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:47,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-15 13:15:47,186 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:47,187 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:47,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 13:15:47,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-15 13:15:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to oldGroup 2023-07-15 13:15:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-15 13:15:47,192 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,193 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-15 13:15:47,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:47,198 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:47,198 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:47,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
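[editor's note] The cleanup above follows the order RSGroups requires before a group can be dropped: move its tables back to default (an empty set is simply ignored, as the DEBUG line notes), move its servers back to default, and only then call RemoveRSGroup. Below is a hedged sketch of that order using the same RSGroupAdminClient that appears in the stack traces (RSGroupAdminClient.java:108); the group name is a placeholder and the constructor usage is an assumption.

import java.io.IOException;
import java.util.Set;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RSGroupCleanupSketch {
  private RSGroupCleanupSketch() {}

  /** Drains a group back into "default" and then removes it, mirroring the log's teardown order. */
  static void dropGroup(Connection conn, String group) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info == null) {
      return; // nothing to clean up
    }
    // 1. Tables first; an empty set is ignored by the server ("moveTables() passed an empty set").
    rsGroupAdmin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
    // 2. Then the servers, e.g. jenkins-hbase4.apache.org:34837 and :37679 in the log.
    Set<Address> servers = info.getServers();
    if (!servers.isEmpty()) {
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    // 3. Only an empty group can be removed.
    rsGroupAdmin.removeRSGroup(group);
  }
}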
2023-07-15 13:15:47,199 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:47,199 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:47,199 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,200 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:47,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:47,205 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:47,208 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:47,209 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:47,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:47,213 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,216 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,216 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,217 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:47,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 617 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428147217, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:47,218 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:47,220 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,220 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,220 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,220 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:47,221 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,221 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,238 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=526 (was 523) Potentially hanging thread: hconnection-0x22934466-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=361 (was 361), ProcessCount=172 (was 172), AvailableMemoryMB=3210 (was 3201) - AvailableMemoryMB LEAK? - 2023-07-15 13:15:47,238 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-15 13:15:47,254 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=526, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=361, ProcessCount=172, AvailableMemoryMB=3209 2023-07-15 13:15:47,254 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-15 13:15:47,254 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-15 13:15:47,258 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,258 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,259 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:47,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
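[editor's note] The earlier WARN ("Got this on setup, FYI") is the test harness tolerating an expected failure: jenkins-hbase4.apache.org:40693 is the master's RPC address, not a live region server, so moveServers rejects it with a ConstraintException and the harness just logs it and continues. A minimal sketch of that tolerate-and-continue pattern follows; the hard-coded address is purely illustrative.

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveMasterAddressSketch {
  private MoveMasterAddressSketch() {}

  /** Tries to pin the master's address into its own group; tolerates the expected rejection. */
  static void tryMoveMasterAddress(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Illustrative address only; in the log this is the master at port 40693, not a region server.
    Address masterAddress = Address.fromString("jenkins-hbase4.apache.org:40693");
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // "Server ... is either offline or it does not exist." -- only region servers can be moved.
      System.out.println("Got this on setup, FYI: " + e.getMessage());
    }
  }
}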
2023-07-15 13:15:47,259 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:47,260 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:47,260 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,260 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:47,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:47,265 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:47,267 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:47,267 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:47,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:47,275 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,277 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,277 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,279 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:47,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:47,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 645 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428147279, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:47,280 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:47,281 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,281 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,282 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,282 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:47,282 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,282 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,283 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:47,283 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,284 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-15 13:15:47,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:47,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,290 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:47,292 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,292 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,294 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup oldgroup 2023-07-15 13:15:47,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:47,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to default 2023-07-15 13:15:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-15 13:15:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:47,300 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:47,300 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:47,302 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-15 13:15:47,302 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:47,303 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:47,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-15 13:15:47,306 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:47,306 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 120 2023-07-15 13:15:47,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-15 13:15:47,308 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:47,308 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,309 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,309 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,311 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:47,312 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,313 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/testRename/d8243996734f4532a436ec0fa187e59a empty. 
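[editor's note] The CreateTableProcedure above (pid=120) was started by an ordinary createTable request for 'testRename' with a single 'tr' family and REGION_REPLICATION => '1'; every other attribute in the logged descriptor is a column-family default. A rough client-side equivalent using the public Admin API, with connection setup elided:

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class CreateTestRenameTableSketch {
  private CreateTestRenameTableSketch() {}

  static void createTestRenameTable(Connection conn) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))  // NAME => 'tr', other attributes default
        .build();
    try (Admin admin = conn.getAdmin()) {
      // Blocks until the CreateTableProcedure (the CREATE_TABLE_* states in the log) completes.
      admin.createTable(desc);
    }
  }
}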
2023-07-15 13:15:47,313 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,313 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-15 13:15:47,326 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:47,327 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => d8243996734f4532a436ec0fa187e59a, NAME => 'testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing d8243996734f4532a436ec0fa187e59a, disabling compactions & flushes 2023-07-15 13:15:47,337 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. after waiting 0 ms 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,337 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,337 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:47,339 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:47,340 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426947340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426947340"}]},"ts":"1689426947340"} 2023-07-15 13:15:47,342 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 13:15:47,347 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:47,347 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426947347"}]},"ts":"1689426947347"} 2023-07-15 13:15:47,348 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-15 13:15:47,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:47,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:47,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:47,351 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:47,352 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, ASSIGN}] 2023-07-15 13:15:47,353 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, ASSIGN 2023-07-15 13:15:47,354 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:47,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-15 13:15:47,504 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 13:15:47,506 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:47,506 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426947506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426947506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426947506"}]},"ts":"1689426947506"} 2023-07-15 13:15:47,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:47,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-15 13:15:47,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8243996734f4532a436ec0fa187e59a, NAME => 'testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:47,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:47,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,666 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,667 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:47,667 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:47,668 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8243996734f4532a436ec0fa187e59a columnFamilyName tr 2023-07-15 13:15:47,668 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(310): Store=d8243996734f4532a436ec0fa187e59a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:47,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:47,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:47,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d8243996734f4532a436ec0fa187e59a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10208558080, jitterRate=-0.04925394058227539}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:47,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:47,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a., pid=122, masterSystemTime=1689426947660 2023-07-15 13:15:47,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:47,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
2023-07-15 13:15:47,677 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:47,677 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426947677"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426947677"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426947677"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426947677"}]},"ts":"1689426947677"} 2023-07-15 13:15:47,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-15 13:15:47,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503 in 170 msec 2023-07-15 13:15:47,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-15 13:15:47,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, ASSIGN in 328 msec 2023-07-15 13:15:47,682 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:47,682 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426947682"}]},"ts":"1689426947682"} 2023-07-15 13:15:47,683 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-15 13:15:47,685 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:47,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=testRename in 381 msec 2023-07-15 13:15:47,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-15 13:15:47,910 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 120 completed 2023-07-15 13:15:47,910 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-15 13:15:47,910 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,914 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-15 13:15:47,914 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:47,915 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
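The entries above trace CreateTableProcedure pid=120 for testRename from CREATE_TABLE_ADD_TO_META through CREATE_TABLE_POST_OPERATION, with the client polling "Checking to see if procedure is done" until completion. For orientation, a minimal client-side sketch of the kind of createTable call that produces such a procedure, using the standard HBase 2.x Admin API; the class name, main() wrapper and configuration source are illustrative and not taken from this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Single column family 'tr', matching the descriptor logged by HRegion(7675) above.
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build();
          // The synchronous createTable blocks until the master-side CreateTableProcedure finishes.
          admin.createTable(td);
        }
      }
    }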
2023-07-15 13:15:47,917 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-15 13:15:47,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:47,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:47,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:47,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:47,921 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-15 13:15:47,921 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region d8243996734f4532a436ec0fa187e59a to RSGroup oldgroup 2023-07-15 13:15:47,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:47,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:47,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:47,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:47,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:47,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE 2023-07-15 13:15:47,923 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-15 13:15:47,923 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE 2023-07-15 13:15:47,923 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:47,923 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426947923"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426947923"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426947923"}]},"ts":"1689426947923"} 2023-07-15 13:15:47,925 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, 
ppid=123, state=RUNNABLE; CloseRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:48,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d8243996734f4532a436ec0fa187e59a, disabling compactions & flushes 2023-07-15 13:15:48,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. after waiting 0 ms 2023-07-15 13:15:48,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:48,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:48,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d8243996734f4532a436ec0fa187e59a move to jenkins-hbase4.apache.org,37679,1689426926099 record at close sequenceid=2 2023-07-15 13:15:48,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,086 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=CLOSED 2023-07-15 13:15:48,086 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426948086"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426948086"}]},"ts":"1689426948086"} 2023-07-15 13:15:48,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-15 13:15:48,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503 in 163 msec 2023-07-15 13:15:48,090 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37679,1689426926099; 
forceNewPlan=false, retain=false 2023-07-15 13:15:48,240 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:15:48,240 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:48,241 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426948240"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426948240"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426948240"}]},"ts":"1689426948240"} 2023-07-15 13:15:48,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:48,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8243996734f4532a436ec0fa187e59a, NAME => 'testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:48,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:48,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,406 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,407 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:48,407 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:48,408 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8243996734f4532a436ec0fa187e59a columnFamilyName tr 2023-07-15 13:15:48,409 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(310): Store=d8243996734f4532a436ec0fa187e59a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:48,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:48,416 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d8243996734f4532a436ec0fa187e59a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11592199040, jitterRate=0.07960766553878784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:48,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:48,417 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a., pid=125, masterSystemTime=1689426948394 2023-07-15 13:15:48,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:48,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
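The close on jenkins-hbase4.apache.org,38761 and reopen on jenkins-hbase4.apache.org,37679 above are the REOPEN/MOVE (pid=123) that RSGroupAdminEndpoint drives after the "move tables [testRename] to rsgroup oldgroup" request. A rough sketch of the client call that initiates such a move, assuming the RSGroupAdminClient API from the branch-2 hbase-rsgroup module; the wrapper class and connection setup are placeholders:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToOldGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues the MoveTables RPC handled by RSGroupAdminEndpoint; the master then runs a
          // REOPEN/MOVE TransitRegionStateProcedure for each region not already in the target group.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }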
2023-07-15 13:15:48,419 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:48,419 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426948419"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426948419"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426948419"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426948419"}]},"ts":"1689426948419"} 2023-07-15 13:15:48,422 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-15 13:15:48,422 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,37679,1689426926099 in 178 msec 2023-07-15 13:15:48,423 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE in 500 msec 2023-07-15 13:15:48,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-15 13:15:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-15 13:15:48,923 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:48,926 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:48,926 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:48,928 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:48,929 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 13:15:48,929 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:48,930 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-15 13:15:48,930 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:48,931 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 13:15:48,931 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:48,932 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:48,932 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:48,933 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-15 13:15:48,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:48,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:48,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:48,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:48,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:48,945 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:48,948 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:48,948 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:48,951 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38761] to rsgroup normal 2023-07-15 13:15:48,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:48,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:48,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:48,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:48,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:48,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:48,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38761,1689426926503] are moved back to default 2023-07-15 13:15:48,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-15 13:15:48,956 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:48,959 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:48,959 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:48,961 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-15 13:15:48,961 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:48,963 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:48,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-15 13:15:48,966 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:48,967 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 126 2023-07-15 13:15:48,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-15 13:15:48,968 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:48,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:48,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:48,969 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-15 13:15:48,970 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:48,972 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:48,974 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:48,974 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 empty. 2023-07-15 13:15:48,975 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:48,975 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-15 13:15:48,991 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:48,992 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7cbf6b928f4a5822d479aa1ed8d58489, NAME => 'unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:49,005 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:49,006 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 7cbf6b928f4a5822d479aa1ed8d58489, disabling compactions & flushes 2023-07-15 13:15:49,006 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,006 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,006 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. after waiting 0 ms 2023-07-15 13:15:49,006 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,006 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
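Between 13:15:48,933 and 13:15:48,956 the handler services an AddRSGroup request for "normal" followed by a MoveServers request for jenkins-hbase4.apache.org:38761. A hedged sketch of the corresponding client calls, again assuming the branch-2 hbase-rsgroup RSGroupAdminClient and the Address helper; the wrapper class is illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServerToNormalGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("normal"); // the AddRSGroup request seen in the log
          // MoveServers: group membership znodes are rewritten, and any regions still hosted on the
          // server but owned by other groups are moved off first (0 regions in this run).
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38761)),
              "normal");
        }
      }
    }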
2023-07-15 13:15:49,006 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:49,008 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:49,009 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949009"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426949009"}]},"ts":"1689426949009"} 2023-07-15 13:15:49,014 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:49,015 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:49,015 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426949015"}]},"ts":"1689426949015"} 2023-07-15 13:15:49,017 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-15 13:15:49,020 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, ASSIGN}] 2023-07-15 13:15:49,022 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, ASSIGN 2023-07-15 13:15:49,022 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:49,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-15 13:15:49,174 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:49,174 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949174"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426949174"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426949174"}]},"ts":"1689426949174"} 2023-07-15 13:15:49,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=127, state=RUNNABLE; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:49,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=126 2023-07-15 13:15:49,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7cbf6b928f4a5822d479aa1ed8d58489, NAME => 'unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:49,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:49,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,335 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,337 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:49,337 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:49,338 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7cbf6b928f4a5822d479aa1ed8d58489 columnFamilyName ut 2023-07-15 13:15:49,338 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(310): Store=7cbf6b928f4a5822d479aa1ed8d58489/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:49,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:49,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7cbf6b928f4a5822d479aa1ed8d58489; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10936539840, jitterRate=0.018544644117355347}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:49,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:49,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489., pid=128, masterSystemTime=1689426949327 2023-07-15 13:15:49,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
2023-07-15 13:15:49,350 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:49,350 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949350"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426949350"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426949350"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426949350"}]},"ts":"1689426949350"} 2023-07-15 13:15:49,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=127 2023-07-15 13:15:49,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=127, state=SUCCESS; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103 in 176 msec 2023-07-15 13:15:49,355 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-15 13:15:49,355 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, ASSIGN in 333 msec 2023-07-15 13:15:49,356 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:49,356 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426949356"}]},"ts":"1689426949356"} 2023-07-15 13:15:49,357 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-15 13:15:49,360 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:49,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=unmovedTable in 397 msec 2023-07-15 13:15:49,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-15 13:15:49,571 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 126 completed 2023-07-15 13:15:49,571 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-15 13:15:49,571 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:49,575 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-15 13:15:49,575 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:49,576 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-15 13:15:49,577 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-15 13:15:49,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 13:15:49,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:49,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:49,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:49,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:49,583 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-15 13:15:49,583 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region 7cbf6b928f4a5822d479aa1ed8d58489 to RSGroup normal 2023-07-15 13:15:49,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE 2023-07-15 13:15:49,584 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-15 13:15:49,584 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE 2023-07-15 13:15:49,585 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:49,585 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949585"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426949585"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426949585"}]},"ts":"1689426949585"} 2023-07-15 13:15:49,587 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:49,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7cbf6b928f4a5822d479aa1ed8d58489, disabling compactions & flushes 2023-07-15 13:15:49,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
2023-07-15 13:15:49,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. after waiting 0 ms 2023-07-15 13:15:49,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:49,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:49,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:49,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7cbf6b928f4a5822d479aa1ed8d58489 move to jenkins-hbase4.apache.org,38761,1689426926503 record at close sequenceid=2 2023-07-15 13:15:49,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:49,747 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=CLOSED 2023-07-15 13:15:49,747 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949747"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426949747"}]},"ts":"1689426949747"} 2023-07-15 13:15:49,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-15 13:15:49,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103 in 162 msec 2023-07-15 13:15:49,750 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:49,900 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:49,901 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426949900"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426949900"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426949900"}]},"ts":"1689426949900"} 2023-07-15 13:15:49,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:50,057 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7cbf6b928f4a5822d479aa1ed8d58489, NAME => 'unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,059 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,060 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:50,060 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:50,060 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
7cbf6b928f4a5822d479aa1ed8d58489 columnFamilyName ut 2023-07-15 13:15:50,061 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(310): Store=7cbf6b928f4a5822d479aa1ed8d58489/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:50,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7cbf6b928f4a5822d479aa1ed8d58489; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12053480960, jitterRate=0.12256789207458496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:50,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:50,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489., pid=131, masterSystemTime=1689426950054 2023-07-15 13:15:50,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
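The entries above record the server side of one RSGroupAdminService.MoveTables request: the rsgroup znodes are rewritten, then pid=129 (REOPEN/MOVE) closes region 7cbf6b928f4a5822d479aa1ed8d58489 on the 44807 server and reopens it on 38761; the procedure is marked SUCCESS in the entries that follow. A minimal client-side sketch of that call is below. The RSGroupAdminClient class appears in the stack trace later in this log, but the constructor and the moveTables signature used here are assumptions, and the class name of the sketch is illustrative.

// Hedged sketch of the client call behind the MoveTables entries above;
// moveTables(Set<TableName>, String) is assumed, not taken from this log.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesToNormalSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Asks the master to place every region of 'unmovedTable' on the servers
      // of rsgroup 'normal'; the master logs this as RSGroupAdminService.MoveTables
      // and drives one REOPEN/MOVE TransitRegionStateProcedure per region.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}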
2023-07-15 13:15:50,068 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:50,068 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426950068"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426950068"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426950068"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426950068"}]},"ts":"1689426950068"} 2023-07-15 13:15:50,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-15 13:15:50,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,38761,1689426926503 in 167 msec 2023-07-15 13:15:50,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE in 488 msec 2023-07-15 13:15:50,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-15 13:15:50,591 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-15 13:15:50,591 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:50,595 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:50,596 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:50,598 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:50,599 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 13:15:50,599 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:50,600 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-15 13:15:50,600 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:50,601 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 13:15:50,602 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:50,603 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-15 13:15:50,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:50,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:50,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:50,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:50,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-15 13:15:50,611 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-15 13:15:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:50,618 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:50,621 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-15 13:15:50,621 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:50,622 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 13:15:50,622 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:50,623 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 13:15:50,623 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:50,629 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:50,629 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:50,631 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-15 13:15:50,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:50,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:50,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:50,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:50,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:50,643 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-15 13:15:50,643 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region 7cbf6b928f4a5822d479aa1ed8d58489 to RSGroup default 2023-07-15 13:15:50,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE 2023-07-15 13:15:50,644 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 13:15:50,644 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE 2023-07-15 13:15:50,644 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:50,645 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426950644"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426950644"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426950644"}]},"ts":"1689426950644"} 2023-07-15 13:15:50,646 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:50,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7cbf6b928f4a5822d479aa1ed8d58489, disabling compactions & flushes 2023-07-15 13:15:50,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. after waiting 0 ms 2023-07-15 13:15:50,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:50,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:50,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:50,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7cbf6b928f4a5822d479aa1ed8d58489 move to jenkins-hbase4.apache.org,44807,1689426930103 record at close sequenceid=5 2023-07-15 13:15:50,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:50,808 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=CLOSED 2023-07-15 13:15:50,808 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426950808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426950808"}]},"ts":"1689426950808"} 2023-07-15 13:15:50,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-15 13:15:50,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,38761,1689426926503 in 164 msec 2023-07-15 13:15:50,812 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:50,963 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:50,963 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426950963"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426950963"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426950963"}]},"ts":"1689426950963"} 2023-07-15 13:15:50,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:51,064 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 13:15:51,120 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7cbf6b928f4a5822d479aa1ed8d58489, NAME => 'unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,122 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,124 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:51,124 DEBUG [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/ut 2023-07-15 13:15:51,124 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7cbf6b928f4a5822d479aa1ed8d58489 columnFamilyName ut 2023-07-15 13:15:51,125 INFO [StoreOpener-7cbf6b928f4a5822d479aa1ed8d58489-1] regionserver.HStore(310): Store=7cbf6b928f4a5822d479aa1ed8d58489/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:51,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:51,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7cbf6b928f4a5822d479aa1ed8d58489; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10574336480, jitterRate=-0.015188172459602356}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:51,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:51,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489., pid=134, masterSystemTime=1689426951116 2023-07-15 13:15:51,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:51,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
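A few entries back, the RenameRSGroup request turns oldgroup into newgroup by rewriting the rsgroup znodes only; no region moves are logged for the rename itself. The close/reopen just logged comes from the separate step that moves unmovedTable back to the default group (pid=132, completed below). A minimal sketch of the rename call follows; it assumes RSGroupAdminClient exposes a renameRSGroup(oldName, newName) method mirroring the RSGroupAdminService.RenameRSGroup RPC named in the log, and the sketch class name is illustrative.

// Hedged sketch of the rename step logged above ("rename rsgroup from oldgroup
// to newgroup"); the renameRSGroup signature is assumed.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Renames the group in place; its servers and tables keep their assignment,
      // which is why the log shows znode rewrites but no region moves for this step.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
      // The test then checks the result via GetRSGroupInfo / GetRSGroupInfoOfTable,
      // matching the retrieval entries that follow the rename in the log.
    }
  }
}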
2023-07-15 13:15:51,134 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=7cbf6b928f4a5822d479aa1ed8d58489, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:51,135 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689426951134"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426951134"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426951134"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426951134"}]},"ts":"1689426951134"} 2023-07-15 13:15:51,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-15 13:15:51,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 7cbf6b928f4a5822d479aa1ed8d58489, server=jenkins-hbase4.apache.org,44807,1689426930103 in 172 msec 2023-07-15 13:15:51,141 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7cbf6b928f4a5822d479aa1ed8d58489, REOPEN/MOVE in 497 msec 2023-07-15 13:15:51,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-15 13:15:51,644 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-15 13:15:51,644 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:51,645 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38761] to rsgroup default 2023-07-15 13:15:51,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 13:15:51,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:51,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:51,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:51,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:15:51,650 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-15 13:15:51,650 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38761,1689426926503] are moved back to normal 2023-07-15 13:15:51,650 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-15 13:15:51,650 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:51,651 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-15 13:15:51,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:51,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:51,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:51,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 13:15:51,657 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:51,658 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:51,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:51,658 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:51,659 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:51,659 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:51,660 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:51,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:51,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:51,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:51,665 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:51,667 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-15 13:15:51,669 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:51,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:51,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:51,672 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-15 13:15:51,672 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(345): Moving region d8243996734f4532a436ec0fa187e59a to RSGroup default 2023-07-15 13:15:51,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE 2023-07-15 13:15:51,673 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 13:15:51,673 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE 2023-07-15 13:15:51,674 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:51,674 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426951674"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426951674"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426951674"}]},"ts":"1689426951674"} 2023-07-15 13:15:51,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE; CloseRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,37679,1689426926099}] 2023-07-15 13:15:51,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:51,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d8243996734f4532a436ec0fa187e59a, disabling compactions & flushes 2023-07-15 13:15:51,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:51,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:51,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
after waiting 0 ms 2023-07-15 13:15:51,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:51,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 13:15:51,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:51,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:51,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d8243996734f4532a436ec0fa187e59a move to jenkins-hbase4.apache.org,38761,1689426926503 record at close sequenceid=5 2023-07-15 13:15:51,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:51,843 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=CLOSED 2023-07-15 13:15:51,843 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426951843"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426951843"}]},"ts":"1689426951843"} 2023-07-15 13:15:51,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=135 2023-07-15 13:15:51,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; CloseRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,37679,1689426926099 in 170 msec 2023-07-15 13:15:51,849 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:52,000 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 13:15:52,000 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:52,000 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426952000"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426952000"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426952000"}]},"ts":"1689426952000"} 2023-07-15 13:15:52,002 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=135, state=RUNNABLE; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:52,157 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:52,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8243996734f4532a436ec0fa187e59a, NAME => 'testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:52,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:52,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,160 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,161 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:52,161 DEBUG [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/tr 2023-07-15 13:15:52,161 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8243996734f4532a436ec0fa187e59a columnFamilyName tr 2023-07-15 13:15:52,162 INFO [StoreOpener-d8243996734f4532a436ec0fa187e59a-1] regionserver.HStore(310): Store=d8243996734f4532a436ec0fa187e59a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:52,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:52,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d8243996734f4532a436ec0fa187e59a; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10001813760, jitterRate=-0.06850850582122803}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:52,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:52,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a., pid=137, masterSystemTime=1689426952154 2023-07-15 13:15:52,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:52,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
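From this point the log is the standard per-test cleanup: servers parked in the temporary groups are returned to default, the now-empty groups are removed, and testRename is moved back to default (the close/reopen just logged, finished below as pid=135). A minimal sketch of the first two cleanup calls is below; the moveServers(Set<Address>, String) and removeRSGroup(String) signatures are assumed from the RSGroupAdminClient usage visible in the stack trace at the end of this section, and the sketch class name is illustrative.

// Hedged sketch of the cleanup order visible in the teardown entries above.
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // 1. Return the region server that was parked in 'normal' to 'default'
      //    (logged as "Move servers done: normal => default").
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38761)),
          "default");
      // 2. Drop the now-empty group; a group that still owns servers or tables
      //    cannot be removed.
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}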
2023-07-15 13:15:52,170 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d8243996734f4532a436ec0fa187e59a, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:52,171 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689426952170"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426952170"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426952170"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426952170"}]},"ts":"1689426952170"} 2023-07-15 13:15:52,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=135 2023-07-15 13:15:52,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; OpenRegionProcedure d8243996734f4532a436ec0fa187e59a, server=jenkins-hbase4.apache.org,38761,1689426926503 in 170 msec 2023-07-15 13:15:52,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d8243996734f4532a436ec0fa187e59a, REOPEN/MOVE in 501 msec 2023-07-15 13:15:52,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure.ProcedureSyncWait(216): waitFor pid=135 2023-07-15 13:15:52,673 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-15 13:15:52,673 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:52,674 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:52,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 13:15:52,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:52,678 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-15 13:15:52,678 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to newgroup 2023-07-15 13:15:52,679 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-15 13:15:52,679 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:52,679 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-15 13:15:52,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:52,688 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:52,691 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:52,691 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:52,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:52,697 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:52,700 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,700 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,702 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:52,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 765 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428152702, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:52,702 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:52,704 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:52,705 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,705 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,705 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:52,706 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:52,706 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,726 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=520 (was 526), OpenFileDescriptor=796 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=329 (was 361), ProcessCount=170 (was 172), AvailableMemoryMB=5230 (was 3209) - AvailableMemoryMB LEAK? 
- 2023-07-15 13:15:52,727 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-15 13:15:52,745 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=520, OpenFileDescriptor=796, MaxFileDescriptor=60000, SystemLoadAverage=329, ProcessCount=170, AvailableMemoryMB=5230 2023-07-15 13:15:52,745 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-15 13:15:52,745 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-15 13:15:52,749 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,749 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,750 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:52,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:52,750 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:52,751 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:52,751 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:52,752 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:52,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:52,757 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:52,759 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:52,760 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:52,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,763 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:52,765 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:52,768 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,768 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,770 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:52,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 793 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428152770, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:52,770 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:52,772 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:52,773 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,773 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,773 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:52,774 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:52,774 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,775 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-15 13:15:52,775 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:52,782 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-15 13:15:52,782 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-15 13:15:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-15 13:15:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,783 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-15 13:15:52,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:36536 deadline: 1689428152783, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-15 13:15:52,786 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-15 13:15:52,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:36536 deadline: 1689428152785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-15 13:15:52,788 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-15 13:15:52,788 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-15 13:15:52,793 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-15 13:15:52,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 812 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:36536 deadline: 1689428152792, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-15 13:15:52,797 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,797 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,798 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:52,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:52,798 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:52,799 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:52,799 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:52,799 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:52,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:52,804 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:52,806 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:52,807 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:52,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:52,812 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:52,815 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,815 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,817 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:52,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 836 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428152817, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:52,820 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:52,821 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:52,822 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,822 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,822 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:52,822 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:52,823 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,838 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=523 (was 520) Potentially hanging thread: hconnection-0x412f866-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x412f866-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x22934466-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=796 (was 796), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=329 (was 329), ProcessCount=170 (was 170), AvailableMemoryMB=5230 (was 5230) 2023-07-15 13:15:52,839 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:52,854 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=523, OpenFileDescriptor=796, MaxFileDescriptor=60000, SystemLoadAverage=329, ProcessCount=170, AvailableMemoryMB=5230 2023-07-15 13:15:52,854 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-15 13:15:52,854 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-15 13:15:52,858 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,858 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,859 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:52,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:15:52,859 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:52,860 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:52,860 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:52,861 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:52,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:52,866 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:52,869 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:52,870 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:52,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:52,880 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:52,883 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-15 13:15:52,883 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,883 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,886 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:52,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline 
or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:52,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 864 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428152886, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:52,887 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:52,890 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:52,890 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,891 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,891 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:52,892 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:52,892 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,893 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:52,893 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,894 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 
13:15:52,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:52,899 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:52,902 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,902 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,905 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:52,909 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 13:15:52,909 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to default 2023-07-15 13:15:52,909 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,909 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:52,911 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:52,911 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:52,913 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,913 INFO 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:52,914 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:52,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=138, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:52,917 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:52,917 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 138 2023-07-15 13:15:52,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-15 13:15:52,919 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:52,919 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:52,919 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:52,919 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:52,922 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:52,926 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:52,926 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:52,926 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:52,926 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:52,926 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 empty. 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 empty. 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed empty. 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 empty. 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f empty. 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:52,927 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:52,928 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:52,928 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-15 13:15:52,939 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:52,941 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 761ecc3594527a5a1867acde1650b342, NAME => 'Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:52,941 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3e7bb26803ce3014b652ebb2a595bfe2, NAME => 'Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:52,941 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 92051405885c311e975d76c343c538ed, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 3e7bb26803ce3014b652ebb2a595bfe2, disabling compactions & flushes 2023-07-15 13:15:52,968 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. after waiting 0 ms 2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:52,968 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 
2023-07-15 13:15:52,968 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 3e7bb26803ce3014b652ebb2a595bfe2: 2023-07-15 13:15:52,969 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 155470c7e6fb544498395c5df80b7f1f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 761ecc3594527a5a1867acde1650b342, disabling compactions & flushes 2023-07-15 13:15:52,970 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. after waiting 0 ms 2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:52,970 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 
2023-07-15 13:15:52,970 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 761ecc3594527a5a1867acde1650b342: 2023-07-15 13:15:52,971 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => feff9d42b135ef22249348477a9d4e66, NAME => 'Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp 2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 92051405885c311e975d76c343c538ed, disabling compactions & flushes 2023-07-15 13:15:52,982 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. after waiting 0 ms 2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:52,982 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 
2023-07-15 13:15:52,982 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 92051405885c311e975d76c343c538ed: 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 155470c7e6fb544498395c5df80b7f1f, disabling compactions & flushes 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing feff9d42b135ef22249348477a9d4e66, disabling compactions & flushes 2023-07-15 13:15:53,003 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,003 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. after waiting 0 ms 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. after waiting 0 ms 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,003 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 
2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for feff9d42b135ef22249348477a9d4e66: 2023-07-15 13:15:53,003 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,003 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 155470c7e6fb544498395c5df80b7f1f: 2023-07-15 13:15:53,006 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:53,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953007"}]},"ts":"1689426953007"} 2023-07-15 13:15:53,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953007"}]},"ts":"1689426953007"} 2023-07-15 13:15:53,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953007"}]},"ts":"1689426953007"} 2023-07-15 13:15:53,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953007"}]},"ts":"1689426953007"} 2023-07-15 13:15:53,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953007"}]},"ts":"1689426953007"} 2023-07-15 13:15:53,009 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-15 13:15:53,010 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:53,010 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426953010"}]},"ts":"1689426953010"} 2023-07-15 13:15:53,011 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-15 13:15:53,014 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:53,014 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:53,014 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:53,014 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:53,014 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, ASSIGN}, {pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, ASSIGN}, {pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, ASSIGN}, {pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, ASSIGN}, {pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, ASSIGN}] 2023-07-15 13:15:53,016 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, ASSIGN 2023-07-15 13:15:53,016 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, ASSIGN 2023-07-15 13:15:53,016 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, ASSIGN 2023-07-15 13:15:53,017 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, ASSIGN 2023-07-15 13:15:53,017 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:53,017 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, ASSIGN 2023-07-15 13:15:53,017 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:53,017 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:53,017 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44807,1689426930103; forceNewPlan=false, retain=false 2023-07-15 13:15:53,018 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38761,1689426926503; forceNewPlan=false, retain=false 2023-07-15 13:15:53,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-15 13:15:53,167 INFO [jenkins-hbase4:40693] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-15 13:15:53,171 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=155470c7e6fb544498395c5df80b7f1f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,171 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=761ecc3594527a5a1867acde1650b342, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,171 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953171"}]},"ts":"1689426953171"} 2023-07-15 13:15:53,171 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=3e7bb26803ce3014b652ebb2a595bfe2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,171 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953171"}]},"ts":"1689426953171"} 2023-07-15 13:15:53,171 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953171"}]},"ts":"1689426953171"} 2023-07-15 13:15:53,171 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=92051405885c311e975d76c343c538ed, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,171 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=feff9d42b135ef22249348477a9d4e66, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,172 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953171"}]},"ts":"1689426953171"} 2023-07-15 13:15:53,172 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953171"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953171"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953171"}]},"ts":"1689426953171"} 2023-07-15 13:15:53,173 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=142, state=RUNNABLE; OpenRegionProcedure 155470c7e6fb544498395c5df80b7f1f, 
server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,173 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=139, state=RUNNABLE; OpenRegionProcedure 761ecc3594527a5a1867acde1650b342, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:53,174 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=140, state=RUNNABLE; OpenRegionProcedure 3e7bb26803ce3014b652ebb2a595bfe2, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,179 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; OpenRegionProcedure 92051405885c311e975d76c343c538ed, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=143, state=RUNNABLE; OpenRegionProcedure feff9d42b135ef22249348477a9d4e66, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:53,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-15 13:15:53,329 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e7bb26803ce3014b652ebb2a595bfe2, NAME => 'Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 13:15:53,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,331 INFO [StoreOpener-3e7bb26803ce3014b652ebb2a595bfe2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,332 DEBUG [StoreOpener-3e7bb26803ce3014b652ebb2a595bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/f 2023-07-15 13:15:53,333 DEBUG [StoreOpener-3e7bb26803ce3014b652ebb2a595bfe2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/f 2023-07-15 13:15:53,333 INFO [StoreOpener-3e7bb26803ce3014b652ebb2a595bfe2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e7bb26803ce3014b652ebb2a595bfe2 columnFamilyName f 2023-07-15 13:15:53,334 INFO [StoreOpener-3e7bb26803ce3014b652ebb2a595bfe2-1] regionserver.HStore(310): Store=3e7bb26803ce3014b652ebb2a595bfe2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:53,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => feff9d42b135ef22249348477a9d4e66, NAME => 'Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 13:15:53,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,336 INFO [StoreOpener-feff9d42b135ef22249348477a9d4e66-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,337 DEBUG [StoreOpener-feff9d42b135ef22249348477a9d4e66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/f 2023-07-15 13:15:53,337 DEBUG [StoreOpener-feff9d42b135ef22249348477a9d4e66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/f 2023-07-15 13:15:53,338 INFO [StoreOpener-feff9d42b135ef22249348477a9d4e66-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region feff9d42b135ef22249348477a9d4e66 columnFamilyName f 2023-07-15 13:15:53,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,338 INFO [StoreOpener-feff9d42b135ef22249348477a9d4e66-1] regionserver.HStore(310): Store=feff9d42b135ef22249348477a9d4e66/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:53,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:53,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e7bb26803ce3014b652ebb2a595bfe2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10678806080, jitterRate=-0.0054586827754974365}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:53,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e7bb26803ce3014b652ebb2a595bfe2: 2023-07-15 13:15:53,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2., pid=146, masterSystemTime=1689426953325 2023-07-15 13:15:53,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 155470c7e6fb544498395c5df80b7f1f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 13:15:53,343 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=3e7bb26803ce3014b652ebb2a595bfe2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953343"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426953343"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426953343"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426953343"}]},"ts":"1689426953343"} 2023-07-15 13:15:53,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:53,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened feff9d42b135ef22249348477a9d4e66; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10904835840, 
jitterRate=0.015591979026794434}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:53,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for feff9d42b135ef22249348477a9d4e66: 2023-07-15 13:15:53,345 INFO [StoreOpener-155470c7e6fb544498395c5df80b7f1f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66., pid=148, masterSystemTime=1689426953331 2023-07-15 13:15:53,346 DEBUG [StoreOpener-155470c7e6fb544498395c5df80b7f1f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/f 2023-07-15 13:15:53,347 DEBUG [StoreOpener-155470c7e6fb544498395c5df80b7f1f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/f 2023-07-15 13:15:53,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=140 2023-07-15 13:15:53,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=140, state=SUCCESS; OpenRegionProcedure 3e7bb26803ce3014b652ebb2a595bfe2, server=jenkins-hbase4.apache.org,44807,1689426930103 in 171 msec 2023-07-15 13:15:53,347 INFO [StoreOpener-155470c7e6fb544498395c5df80b7f1f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 155470c7e6fb544498395c5df80b7f1f columnFamilyName f 2023-07-15 13:15:53,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 
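Note: the CompactionConfiguration(173) entries above dump the effective compaction tuning for column family f; every number shown matches the stock defaults. As a reading aid only (the key names below are the standard hbase-site.xml properties; the mapping is an assumption added here, not something this log prints), a minimal sketch of where those values come from:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionDefaults {
  public static void main(String[] args) {
    // No cluster needed; this only reads configuration values/defaults.
    Configuration conf = HBaseConfiguration.create();

    // "minCompactSize:128 MB" and the file-count window "[minFilesToCompact:3, maxFilesToCompact:10)".
    System.out.println(conf.getLong("hbase.hstore.compaction.min.size", 134217728L));
    System.out.println(conf.getInt("hbase.hstore.compaction.min", 3));
    System.out.println(conf.getInt("hbase.hstore.compaction.max", 10));

    // "ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560".
    System.out.println(conf.getFloat("hbase.hstore.compaction.ratio", 1.2F));
    System.out.println(conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F));
    System.out.println(conf.getLong("hbase.regionserver.thread.compaction.throttle", 2684354560L));

    // "major period 604800000, major jitter 0.500000" (7 days +/- 50%).
    System.out.println(conf.getLong("hbase.hregion.majorcompaction", 604800000L));
    System.out.println(conf.getFloat("hbase.hregion.majorcompaction.jitter", 0.5F));
  }
}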
2023-07-15 13:15:53,347 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=feff9d42b135ef22249348477a9d4e66, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 761ecc3594527a5a1867acde1650b342, NAME => 'Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 13:15:53,348 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953347"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426953347"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426953347"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426953347"}]},"ts":"1689426953347"} 2023-07-15 13:15:53,348 INFO [StoreOpener-155470c7e6fb544498395c5df80b7f1f-1] regionserver.HStore(310): Store=155470c7e6fb544498395c5df80b7f1f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:53,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, ASSIGN in 333 msec 2023-07-15 13:15:53,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,349 INFO [StoreOpener-761ecc3594527a5a1867acde1650b342-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume 
processing ppid=143 2023-07-15 13:15:53,351 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; OpenRegionProcedure feff9d42b135ef22249348477a9d4e66, server=jenkins-hbase4.apache.org,38761,1689426926503 in 170 msec 2023-07-15 13:15:53,351 DEBUG [StoreOpener-761ecc3594527a5a1867acde1650b342-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/f 2023-07-15 13:15:53,351 DEBUG [StoreOpener-761ecc3594527a5a1867acde1650b342-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/f 2023-07-15 13:15:53,351 INFO [StoreOpener-761ecc3594527a5a1867acde1650b342-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 761ecc3594527a5a1867acde1650b342 columnFamilyName f 2023-07-15 13:15:53,352 INFO [StoreOpener-761ecc3594527a5a1867acde1650b342-1] regionserver.HStore(310): Store=761ecc3594527a5a1867acde1650b342/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:53,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,352 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, ASSIGN in 336 msec 2023-07-15 13:15:53,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:53,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
155470c7e6fb544498395c5df80b7f1f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11866165600, jitterRate=0.10512278974056244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:53,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 155470c7e6fb544498395c5df80b7f1f: 2023-07-15 13:15:53,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f., pid=144, masterSystemTime=1689426953325 2023-07-15 13:15:53,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:53,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,359 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=155470c7e6fb544498395c5df80b7f1f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 761ecc3594527a5a1867acde1650b342; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11015019520, jitterRate=0.025853633880615234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:53,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 
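Note: each assignment.RegionStateStore(219)/(405) pair above persists the new regionState=OPEN, openSeqNum and regionLocation of a region into hbase:meta. The same placement can be read back from a client through a RegionLocator; a minimal sketch (connection and classpath details assumed, table name taken from this test):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
      // One line per region: encoded name plus hosting server, mirroring the
      // regionLocation=... values written to hbase:meta in the entries above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}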
2023-07-15 13:15:53,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 761ecc3594527a5a1867acde1650b342: 2023-07-15 13:15:53,359 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953359"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426953359"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426953359"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426953359"}]},"ts":"1689426953359"} 2023-07-15 13:15:53,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 92051405885c311e975d76c343c538ed, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 13:15:53,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:53,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342., pid=145, masterSystemTime=1689426953331 2023-07-15 13:15:53,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:53,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 
2023-07-15 13:15:53,362 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=761ecc3594527a5a1867acde1650b342, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,362 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953361"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426953361"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426953361"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426953361"}]},"ts":"1689426953361"} 2023-07-15 13:15:53,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=142 2023-07-15 13:15:53,362 INFO [StoreOpener-92051405885c311e975d76c343c538ed-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=142, state=SUCCESS; OpenRegionProcedure 155470c7e6fb544498395c5df80b7f1f, server=jenkins-hbase4.apache.org,44807,1689426930103 in 187 msec 2023-07-15 13:15:53,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, ASSIGN in 348 msec 2023-07-15 13:15:53,364 DEBUG [StoreOpener-92051405885c311e975d76c343c538ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/f 2023-07-15 13:15:53,364 DEBUG [StoreOpener-92051405885c311e975d76c343c538ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/f 2023-07-15 13:15:53,365 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=139 2023-07-15 13:15:53,365 INFO [StoreOpener-92051405885c311e975d76c343c538ed-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 92051405885c311e975d76c343c538ed columnFamilyName f 2023-07-15 13:15:53,365 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=139, state=SUCCESS; OpenRegionProcedure 761ecc3594527a5a1867acde1650b342, server=jenkins-hbase4.apache.org,38761,1689426926503 in 190 msec 2023-07-15 13:15:53,366 INFO 
[StoreOpener-92051405885c311e975d76c343c538ed-1] regionserver.HStore(310): Store=92051405885c311e975d76c343c538ed/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:53,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, ASSIGN in 351 msec 2023-07-15 13:15:53,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:53,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 92051405885c311e975d76c343c538ed; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10671586080, jitterRate=-0.006131097674369812}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:53,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 92051405885c311e975d76c343c538ed: 2023-07-15 13:15:53,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed., pid=147, masterSystemTime=1689426953325 2023-07-15 13:15:53,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:53,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 
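Note: the five "Opened ...; next sequenceid=2; ..." entries report slightly different desiredMaxFileSize values because ConstantSizeRegionSplitPolicy adds a per-region random jitter on top of hbase.hregion.max.filesize (10737418240 bytes here). A small sanity-check sketch of that relationship, using the jitterRate values printed above (the formula desiredMaxFileSize = base + (long)(base * jitterRate) is an assumption drawn from the split policy's documented behaviour, not from this log):

public class SplitSizeJitterCheck {
  public static void main(String[] args) {
    long base = 10737418240L; // hbase.hregion.max.filesize default, 10 GB

    // (jitterRate, desiredMaxFileSize) pairs copied from the "Opened ..." lines in this log.
    double[] jitterRates = {-0.0054586827754974365, 0.015591979026794434,
        0.10512278974056244, 0.025853633880615234, -0.006131097674369812};
    long[] logged = {10678806080L, 10904835840L, 11866165600L, 11015019520L, 10671586080L};

    for (int i = 0; i < jitterRates.length; i++) {
      long computed = base + (long) (base * jitterRates[i]);
      // diff should be 0 (or within a byte or two of rounding) for every region above.
      System.out.printf("logged=%d computed=%d diff=%d%n", logged[i], computed, logged[i] - computed);
    }
  }
}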
2023-07-15 13:15:53,374 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=92051405885c311e975d76c343c538ed, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,374 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953374"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426953374"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426953374"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426953374"}]},"ts":"1689426953374"} 2023-07-15 13:15:53,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-15 13:15:53,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; OpenRegionProcedure 92051405885c311e975d76c343c538ed, server=jenkins-hbase4.apache.org,44807,1689426930103 in 200 msec 2023-07-15 13:15:53,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=138 2023-07-15 13:15:53,379 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, ASSIGN in 363 msec 2023-07-15 13:15:53,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:53,379 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426953379"}]},"ts":"1689426953379"} 2023-07-15 13:15:53,380 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-15 13:15:53,382 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:53,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 468 msec 2023-07-15 13:15:53,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-15 13:15:53,521 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 138 completed 2023-07-15 13:15:53,521 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-15 13:15:53,521 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:53,526 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
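Note: the "Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms" entry is the test utility's assignment barrier. In test code this is typically a single call; a minimal sketch against the HBaseTestingUtility used throughout this log (timeout matching the 60000 ms shown):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignment {
  // Blocks until every region of the table has a server recorded in hbase:meta and the
  // AssignmentManager agrees, or fails once the timeout is exceeded.
  static void waitForTable(HBaseTestingUtility util, String table) throws Exception {
    util.waitUntilAllRegionsAssigned(TableName.valueOf(table), 60000);
  }
}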
2023-07-15 13:15:53,526 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:53,527 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-15 13:15:53,527 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:53,535 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-15 13:15:53,535 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:53,536 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-15 13:15:53,536 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-15 13:15:53,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-15 13:15:53,540 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426953540"}]},"ts":"1689426953540"} 2023-07-15 13:15:53,541 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-15 13:15:53,544 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-15 13:15:53,544 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, UNASSIGN}, {pid=151, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, UNASSIGN}, {pid=152, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, UNASSIGN}, {pid=153, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, UNASSIGN}, {pid=154, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, UNASSIGN}] 2023-07-15 13:15:53,549 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=153, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, UNASSIGN 2023-07-15 13:15:53,549 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=152, ppid=149, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, UNASSIGN 2023-07-15 13:15:53,549 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, UNASSIGN 2023-07-15 13:15:53,549 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=154, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, UNASSIGN 2023-07-15 13:15:53,549 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, UNASSIGN 2023-07-15 13:15:53,550 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=153 updating hbase:meta row=155470c7e6fb544498395c5df80b7f1f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,550 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=152 updating hbase:meta row=92051405885c311e975d76c343c538ed, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,550 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953550"}]},"ts":"1689426953550"} 2023-07-15 13:15:53,550 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=3e7bb26803ce3014b652ebb2a595bfe2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:53,550 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=761ecc3594527a5a1867acde1650b342, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,550 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953550"}]},"ts":"1689426953550"} 2023-07-15 13:15:53,550 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=154 updating hbase:meta row=feff9d42b135ef22249348477a9d4e66, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:53,550 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953550"}]},"ts":"1689426953550"} 2023-07-15 13:15:53,550 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953550"}]},"ts":"1689426953550"} 2023-07-15 13:15:53,550 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426953550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426953550"}]},"ts":"1689426953550"} 2023-07-15 13:15:53,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=153, state=RUNNABLE; CloseRegionProcedure 155470c7e6fb544498395c5df80b7f1f, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=151, state=RUNNABLE; CloseRegionProcedure 3e7bb26803ce3014b652ebb2a595bfe2, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=157, ppid=150, state=RUNNABLE; CloseRegionProcedure 761ecc3594527a5a1867acde1650b342, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:53,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=158, ppid=152, state=RUNNABLE; CloseRegionProcedure 92051405885c311e975d76c343c538ed, server=jenkins-hbase4.apache.org,44807,1689426930103}] 2023-07-15 13:15:53,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=159, ppid=154, state=RUNNABLE; CloseRegionProcedure feff9d42b135ef22249348477a9d4e66, server=jenkins-hbase4.apache.org,38761,1689426926503}] 2023-07-15 13:15:53,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-15 13:15:53,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 155470c7e6fb544498395c5df80b7f1f, disabling compactions & flushes 2023-07-15 13:15:53,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. after waiting 0 ms 2023-07-15 13:15:53,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 
2023-07-15 13:15:53,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing feff9d42b135ef22249348477a9d4e66, disabling compactions & flushes 2023-07-15 13:15:53,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. after waiting 0 ms 2023-07-15 13:15:53,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:53,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:53,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f. 2023-07-15 13:15:53,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 155470c7e6fb544498395c5df80b7f1f: 2023-07-15 13:15:53,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66. 2023-07-15 13:15:53,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for feff9d42b135ef22249348477a9d4e66: 2023-07-15 13:15:53,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 761ecc3594527a5a1867acde1650b342, disabling compactions & flushes 2023-07-15 13:15:53,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:53,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 
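Note: the DisableTableProcedure above (pid=149) sets the table to DISABLING, fans out one UNASSIGN/CloseRegionProcedure per region (pids 150-159), and the RS_CLOSE_REGION handlers then take the close locks seen here. From a client, that whole sequence is driven by one Admin call; a minimal sketch (connection setup assumed):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableExample {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        // Submits DisableTableProcedure and blocks until it reaches SUCCESS,
        // i.e. every region is unassigned and the table state is DISABLED.
        admin.disableTable(table);
      }
      System.out.println("disabled? " + admin.isTableDisabled(table));
    }
  }
}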
2023-07-15 13:15:53,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. after waiting 0 ms 2023-07-15 13:15:53,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:53,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:53,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342. 2023-07-15 13:15:53,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 761ecc3594527a5a1867acde1650b342: 2023-07-15 13:15:53,724 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=154 updating hbase:meta row=feff9d42b135ef22249348477a9d4e66, regionState=CLOSED 2023-07-15 13:15:53,724 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953724"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953724"}]},"ts":"1689426953724"} 2023-07-15 13:15:53,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e7bb26803ce3014b652ebb2a595bfe2, disabling compactions & flushes 2023-07-15 13:15:53,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 
after waiting 0 ms 2023-07-15 13:15:53,726 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=153 updating hbase:meta row=155470c7e6fb544498395c5df80b7f1f, regionState=CLOSED 2023-07-15 13:15:53,726 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953726"}]},"ts":"1689426953726"} 2023-07-15 13:15:53,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,727 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=761ecc3594527a5a1867acde1650b342, regionState=CLOSED 2023-07-15 13:15:53,727 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689426953727"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953727"}]},"ts":"1689426953727"} 2023-07-15 13:15:53,730 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=159, resume processing ppid=154 2023-07-15 13:15:53,730 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=159, ppid=154, state=SUCCESS; CloseRegionProcedure feff9d42b135ef22249348477a9d4e66, server=jenkins-hbase4.apache.org,38761,1689426926503 in 172 msec 2023-07-15 13:15:53,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=153 2023-07-15 13:15:53,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=153, state=SUCCESS; CloseRegionProcedure 155470c7e6fb544498395c5df80b7f1f, server=jenkins-hbase4.apache.org,44807,1689426930103 in 176 msec 2023-07-15 13:15:53,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=157, resume processing ppid=150 2023-07-15 13:15:53,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=feff9d42b135ef22249348477a9d4e66, UNASSIGN in 186 msec 2023-07-15 13:15:53,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=157, ppid=150, state=SUCCESS; CloseRegionProcedure 761ecc3594527a5a1867acde1650b342, server=jenkins-hbase4.apache.org,38761,1689426926503 in 176 msec 2023-07-15 13:15:53,733 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=155470c7e6fb544498395c5df80b7f1f, UNASSIGN in 187 msec 2023-07-15 13:15:53,734 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=761ecc3594527a5a1867acde1650b342, UNASSIGN in 188 msec 2023-07-15 13:15:53,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:53,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2. 2023-07-15 13:15:53,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e7bb26803ce3014b652ebb2a595bfe2: 2023-07-15 13:15:53,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 92051405885c311e975d76c343c538ed, disabling compactions & flushes 2023-07-15 13:15:53,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:53,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:53,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. after waiting 0 ms 2023-07-15 13:15:53,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 2023-07-15 13:15:53,738 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=3e7bb26803ce3014b652ebb2a595bfe2, regionState=CLOSED 2023-07-15 13:15:53,739 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953738"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953738"}]},"ts":"1689426953738"} 2023-07-15 13:15:53,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=151 2023-07-15 13:15:53,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=151, state=SUCCESS; CloseRegionProcedure 3e7bb26803ce3014b652ebb2a595bfe2, server=jenkins-hbase4.apache.org,44807,1689426930103 in 187 msec 2023-07-15 13:15:53,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:53,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed. 
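Note: the remainder of this log shows the already-disabled table being re-homed to rsgroup Group_testDisabledTableMove_465041494 ("Moving 0 region(s) ... Skipping move regions because the table ... is disabled", since nothing is online to move), a second disable attempt failing with TableNotEnabledException, and the table being deleted with its region directories archived. A hedged client-side sketch of that sequence, assuming the branch-2.x RSGroupAdminClient API shipped with this hbase-rsgroup module:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveDisabledTable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Only the table-to-group mapping in the rsgroup metadata changes here;
      // no region moves are scheduled because the table has no online regions.
      rsGroupAdmin.moveTables(Collections.singleton(table),
          "Group_testDisabledTableMove_465041494");

      // The table is already disabled (a second disableTable would throw
      // TableNotEnabledException, as the log shows), so go straight to delete.
      admin.deleteTable(table);
    }
  }
}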
2023-07-15 13:15:53,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 92051405885c311e975d76c343c538ed: 2023-07-15 13:15:53,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=3e7bb26803ce3014b652ebb2a595bfe2, UNASSIGN in 198 msec 2023-07-15 13:15:53,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,745 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=152 updating hbase:meta row=92051405885c311e975d76c343c538ed, regionState=CLOSED 2023-07-15 13:15:53,745 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689426953745"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426953745"}]},"ts":"1689426953745"} 2023-07-15 13:15:53,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=158, resume processing ppid=152 2023-07-15 13:15:53,748 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=158, ppid=152, state=SUCCESS; CloseRegionProcedure 92051405885c311e975d76c343c538ed, server=jenkins-hbase4.apache.org,44807,1689426930103 in 192 msec 2023-07-15 13:15:53,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=149 2023-07-15 13:15:53,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=92051405885c311e975d76c343c538ed, UNASSIGN in 204 msec 2023-07-15 13:15:53,750 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426953750"}]},"ts":"1689426953750"} 2023-07-15 13:15:53,751 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-15 13:15:53,752 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-15 13:15:53,754 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 217 msec 2023-07-15 13:15:53,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-15 13:15:53,842 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-15 13:15:53,842 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,844 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:53,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:53,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:53,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-15 13:15:53,851 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_465041494, current retry=0 2023-07-15 13:15:53,851 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_465041494. 2023-07-15 13:15:53,851 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:53,855 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:53,855 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:53,858 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-15 13:15:53,858 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:15:53,861 INFO [Listener at localhost/38739] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-15 13:15:53,861 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-15 13:15:53,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:53,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 924 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:36536 deadline: 1689427013861, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-15 13:15:53,863 DEBUG [Listener at localhost/38739] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-15 13:15:53,863 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-15 13:15:53,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] procedure2.ProcedureExecutor(1029): Stored pid=161, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,866 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=161, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_465041494' 2023-07-15 13:15:53,867 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=161, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:53,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:53,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:53,875 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,875 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,875 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to 
see if procedure is done pid=161 2023-07-15 13:15:53,875 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,875 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,878 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/recovered.edits] 2023-07-15 13:15:53,878 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/recovered.edits] 2023-07-15 13:15:53,879 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/recovered.edits] 2023-07-15 13:15:53,879 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/recovered.edits] 2023-07-15 13:15:53,879 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/f, FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/recovered.edits] 2023-07-15 13:15:53,901 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f/recovered.edits/4.seqid 2023-07-15 13:15:53,902 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2/recovered.edits/4.seqid 2023-07-15 13:15:53,902 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66/recovered.edits/4.seqid 2023-07-15 13:15:53,902 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed/recovered.edits/4.seqid 2023-07-15 13:15:53,902 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/recovered.edits/4.seqid to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/archive/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342/recovered.edits/4.seqid 2023-07-15 13:15:53,902 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/155470c7e6fb544498395c5df80b7f1f 2023-07-15 13:15:53,903 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/3e7bb26803ce3014b652ebb2a595bfe2 2023-07-15 13:15:53,903 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/92051405885c311e975d76c343c538ed 2023-07-15 13:15:53,903 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/feff9d42b135ef22249348477a9d4e66 2023-07-15 13:15:53,903 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/.tmp/data/default/Group_testDisabledTableMove/761ecc3594527a5a1867acde1650b342 2023-07-15 13:15:53,904 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-15 13:15:53,907 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=161, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,910 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from 
hbase:meta 2023-07-15 13:15:53,916 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=161, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426953918"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426953918"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689426952914.92051405885c311e975d76c343c538ed.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426953918"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426953918"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,918 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426953918"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,920 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 13:15:53,920 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 761ecc3594527a5a1867acde1650b342, NAME => 'Group_testDisabledTableMove,,1689426952914.761ecc3594527a5a1867acde1650b342.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 3e7bb26803ce3014b652ebb2a595bfe2, NAME => 'Group_testDisabledTableMove,aaaaa,1689426952914.3e7bb26803ce3014b652ebb2a595bfe2.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 92051405885c311e975d76c343c538ed, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689426952914.92051405885c311e975d76c343c538ed.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 155470c7e6fb544498395c5df80b7f1f, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689426952914.155470c7e6fb544498395c5df80b7f1f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => feff9d42b135ef22249348477a9d4e66, NAME => 'Group_testDisabledTableMove,zzzzz,1689426952914.feff9d42b135ef22249348477a9d4e66.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 13:15:53,920 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
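
The log above traces the tail of testDisabledTableMove: the client's DisableTable call is rejected with TableNotEnabledException because the table is already disabled, the test deletes the table anyway, and the master runs DeleteTableProcedure pid=161 through region archiving, META cleanup, and descriptor removal. A minimal sketch of that client-side sequence against the public Admin API is shown below; the class name and main() wrapper are illustrative only and do not come from the test source.

// Illustrative sketch (not from the test source): delete a table the way the log
// above shows -- tolerate TableNotEnabledException when the table is already
// disabled, then issue the delete and let the master run DeleteTableProcedure.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteDisabledTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      try {
        admin.disableTable(table);   // the master rejects this if the table is already disabled
      } catch (TableNotEnabledException e) {
        // matches "already disabled, so just deleting it" in the log above
      }
      admin.deleteTable(table);      // drives the DeleteTableProcedure seen as pid=161
    }
  }
}
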
2023-07-15 13:15:53,921 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426953920"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:53,922 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-15 13:15:53,926 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=161, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 13:15:53,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=161, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 63 msec 2023-07-15 13:15:53,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(1230): Checking to see if procedure is done pid=161 2023-07-15 13:15:53,978 INFO [Listener at localhost/38739] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 161 completed 2023-07-15 13:15:53,981 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:53,981 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:53,982 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:53,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
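
At this point the teardown in TestRSGroupsBase starts restoring rsgroup state: it lists the groups and asks the master to move any tables still assigned to the test group back to the default group (here the set is empty, so RSGroupAdminServer ignores the call). A sketch of that step follows, assuming the branch-2.4 RSGroupAdminClient and RSGroupInfo signatures; the helper method itself is hypothetical.

// Illustrative sketch (assumed RSGroupAdminClient signatures): move any tables
// left in a test group back to the default group before the group is removed.
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesBackSketch {
  static void moveTablesBackToDefault(Connection conn, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info != null) {
      Set<TableName> tables = info.getTables();
      // An empty set is legal; the server then logs
      // "moveTables() passed an empty set. Ignoring." as seen above.
      rsGroupAdmin.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);
    }
  }
}
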
2023-07-15 13:15:53,982 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:53,983 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679] to rsgroup default 2023-07-15 13:15:53,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:53,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:53,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:15:53,988 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_465041494, current retry=0 2023-07-15 13:15:53,989 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34837,1689426926314, jenkins-hbase4.apache.org,37679,1689426926099] are moved back to Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,989 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_465041494 => default 2023-07-15 13:15:53,989 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:53,989 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_465041494 2023-07-15 13:15:53,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:53,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:53,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:15:53,996 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:53,997 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:53,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
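
The step mirrored above moves the group's servers (jenkins-hbase4.apache.org:34837 and :37679) back to the default group and then removes the emptied group Group_testDisabledTableMove_465041494. A sketch of that pair of calls follows, again assuming the RSGroupAdminClient and Address signatures used on this branch; the helper is hypothetical.

// Illustrative sketch (assumed signatures): return a group's servers to "default"
// and then remove the now-empty group, as the teardown above does.
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RemoveGroupSketch {
  static void drainAndRemoveGroup(Connection conn, String group) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    if (info == null) {
      return;                                  // nothing to clean up
    }
    Set<Address> servers = info.getServers();  // e.g. jenkins-hbase4.apache.org:34837, :37679
    if (!servers.isEmpty()) {
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    rsGroupAdmin.removeRSGroup(group);         // "remove rsgroup ..." in the log above
  }
}
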
2023-07-15 13:15:53,997 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:53,997 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:53,997 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:53,998 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:54,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:54,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:54,004 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:54,008 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:54,008 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:54,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:54,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:15:54,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:54,014 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:54,016 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:54,016 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:54,018 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:54,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:54,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 958 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428154018, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:54,019 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:15:54,020 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:54,021 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:54,021 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:54,021 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:54,022 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:54,022 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:54,048 INFO [Listener at localhost/38739] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=525 (was 523) Potentially hanging thread: hconnection-0x22934466-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1300411426_17 at /127.0.0.1:45118 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4925918a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1068863705_17 at /127.0.0.1:34870 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=823 (was 796) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=329 (was 329), ProcessCount=170 (was 170), AvailableMemoryMB=5236 (was 5230) - AvailableMemoryMB LEAK? 
- 2023-07-15 13:15:54,048 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-15 13:15:54,071 INFO [Listener at localhost/38739] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=525, OpenFileDescriptor=823, MaxFileDescriptor=60000, SystemLoadAverage=329, ProcessCount=170, AvailableMemoryMB=5236 2023-07-15 13:15:54,071 WARN [Listener at localhost/38739] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-15 13:15:54,071 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-15 13:15:54,076 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:54,076 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:15:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:15:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:15:54,078 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:15:54,078 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:15:54,079 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:15:54,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:54,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:15:54,090 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:15:54,096 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:15:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:15:54,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:54,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-15 13:15:54,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:15:54,102 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:15:54,105 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:54,105 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:54,107 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40693] to rsgroup master 2023-07-15 13:15:54,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:15:54,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] ipc.CallRunner(144): callId: 986 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36536 deadline: 1689428154107, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 2023-07-15 13:15:54,108 WARN [Listener at localhost/38739] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40693 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:15:54,110 INFO [Listener at localhost/38739] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:54,111 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:54,111 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:54,111 INFO [Listener at localhost/38739] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34837, jenkins-hbase4.apache.org:37679, jenkins-hbase4.apache.org:38761, jenkins-hbase4.apache.org:44807], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:15:54,112 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:15:54,112 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40693] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:15:54,112 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 13:15:54,113 INFO [Listener at localhost/38739] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 13:15:54,113 DEBUG [Listener at localhost/38739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29b587e1 to 127.0.0.1:54157 2023-07-15 13:15:54,113 DEBUG [Listener at localhost/38739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,114 DEBUG [Listener at localhost/38739] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 13:15:54,114 DEBUG [Listener at localhost/38739] util.JVMClusterUtil(257): Found active master hash=807557898, stopped=false 2023-07-15 13:15:54,114 DEBUG [Listener at localhost/38739] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:15:54,114 DEBUG [Listener at localhost/38739] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:15:54,114 INFO [Listener at localhost/38739] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:54,117 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:54,117 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:54,118 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:54,117 
DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:54,117 INFO [Listener at localhost/38739] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 13:15:54,118 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:54,118 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:54,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:54,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:54,119 DEBUG [Listener at localhost/38739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a068dda to 127.0.0.1:54157 2023-07-15 13:15:54,118 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:54,119 DEBUG [Listener at localhost/38739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:54,119 INFO [Listener at localhost/38739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37679,1689426926099' ***** 2023-07-15 13:15:54,119 INFO [Listener at localhost/38739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:54,119 INFO [Listener at localhost/38739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34837,1689426926314' ***** 2023-07-15 13:15:54,120 INFO [Listener at localhost/38739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:54,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:54,120 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:54,120 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:54,121 INFO [Listener at localhost/38739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38761,1689426926503' ***** 2023-07-15 13:15:54,125 INFO [Listener at localhost/38739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:54,125 INFO [Listener at localhost/38739] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44807,1689426930103' ***** 2023-07-15 13:15:54,128 INFO [Listener at 
localhost/38739] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:54,128 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:54,125 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:54,145 INFO [RS:0;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@46748067{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:54,147 INFO [RS:3;jenkins-hbase4:44807] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@33753ee9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:54,147 INFO [RS:2;jenkins-hbase4:38761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@db55a1c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:54,147 INFO [RS:1;jenkins-hbase4:34837] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36ab60ce{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:54,150 INFO [RS:3;jenkins-hbase4:44807] server.AbstractConnector(383): Stopped ServerConnector@2652937{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,150 INFO [RS:1;jenkins-hbase4:34837] server.AbstractConnector(383): Stopped ServerConnector@1dc2f9d9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,150 INFO [RS:3;jenkins-hbase4:44807] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:54,150 INFO [RS:1;jenkins-hbase4:34837] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:54,150 INFO [RS:0;jenkins-hbase4:37679] server.AbstractConnector(383): Stopped ServerConnector@2f894590{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,151 INFO [RS:0;jenkins-hbase4:37679] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:54,155 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,155 INFO [RS:3;jenkins-hbase4:44807] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@36e2e513{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:54,155 INFO [RS:2;jenkins-hbase4:38761] server.AbstractConnector(383): Stopped ServerConnector@2286bf8a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,155 INFO [RS:2;jenkins-hbase4:38761] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:54,155 INFO [RS:1;jenkins-hbase4:34837] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f68fdb3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:54,156 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:54,156 INFO 
[regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,155 INFO [RS:0;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21aec1e3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:54,157 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:54,157 INFO [RS:1;jenkins-hbase4:34837] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1536303c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:54,158 INFO [RS:2;jenkins-hbase4:38761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a081fb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:54,157 INFO [RS:3;jenkins-hbase4:44807] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38634af7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:54,157 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:54,160 INFO [RS:0;jenkins-hbase4:37679] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40e7421c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:54,161 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,161 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,161 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:54,161 INFO [RS:2;jenkins-hbase4:38761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@be6058f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:54,162 INFO [RS:2;jenkins-hbase4:38761] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:54,163 INFO [RS:2;jenkins-hbase4:38761] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:15:54,163 INFO [RS:3;jenkins-hbase4:44807] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:54,163 INFO [RS:3;jenkins-hbase4:44807] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:15:54,163 INFO [RS:3;jenkins-hbase4:44807] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:54,163 INFO [RS:2;jenkins-hbase4:38761] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
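Editorial note: the ListRSGroupInfos / GetRSGroupInfo requests and the "Waiting for cleanup to finish" entry logged at 13:15:54,111–112 above come from the test polling the rsgroup coprocessor endpoint before tearing the cluster down. The following is a minimal sketch of that pattern, assuming the branch-2.4 hbase-rsgroup client (RSGroupAdminClient) and an already-running HBaseTestingUtility passed in as a parameter; the expected group count and the 60-second timeout are illustrative values, not taken from the test source.

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupWaitSketch {

      // Sketch only: testUtil is assumed to wrap an already-started minicluster.
      static void waitForRSGroupCleanup(HBaseTestingUtility testUtil) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(testUtil.getConnection());

        // Poll ListRSGroupInfos until only the built-in groups remain, mirroring the
        // "Waiting for cleanup to finish" loop visible in the log above.
        testUtil.waitFor(60000, () -> {
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          return groups.size() == 2;   // e.g. only "default" and "master" left
        });

        // A single-group lookup corresponds to the GetRSGroupInfo request in the log.
        RSGroupInfo defaultInfo = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
        System.out.println("default group servers: " + defaultInfo.getServers());
      }
    }

Each iteration of the waitFor predicate produces one pair of ListRSGroupInfos / MasterRpcServices entries like the ones above, which is why they repeat until the group list settles.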
2023-07-15 13:15:54,163 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(3305): Received CLOSE for d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:54,163 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(3305): Received CLOSE for e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:54,163 INFO [RS:1;jenkins-hbase4:34837] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:54,164 INFO [RS:1;jenkins-hbase4:34837] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:15:54,164 INFO [RS:1;jenkins-hbase4:34837] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:54,164 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(3305): Received CLOSE for 0ee65cdb74bf12e0dc6b097a112f439b 2023-07-15 13:15:54,164 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:54,164 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(3305): Received CLOSE for 7cbf6b928f4a5822d479aa1ed8d58489 2023-07-15 13:15:54,164 DEBUG [RS:1;jenkins-hbase4:34837] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48f96027 to 127.0.0.1:54157 2023-07-15 13:15:54,164 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:54,167 DEBUG [RS:2;jenkins-hbase4:38761] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00bf6c8b to 127.0.0.1:54157 2023-07-15 13:15:54,166 INFO [RS:0;jenkins-hbase4:37679] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:54,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e42fabbf20609a275edbe64c71867bfc, disabling compactions & flushes 2023-07-15 13:15:54,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:54,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:54,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. after waiting 0 ms 2023-07-15 13:15:54,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:54,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e42fabbf20609a275edbe64c71867bfc 1/1 column families, dataSize=22.07 KB heapSize=36.54 KB 2023-07-15 13:15:54,166 DEBUG [RS:1;jenkins-hbase4:34837] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d8243996734f4532a436ec0fa187e59a, disabling compactions & flushes 2023-07-15 13:15:54,168 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
2023-07-15 13:15:54,164 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:54,168 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34837,1689426926314; all regions closed. 2023-07-15 13:15:54,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:54,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. after waiting 0 ms 2023-07-15 13:15:54,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:54,168 DEBUG [RS:3;jenkins-hbase4:44807] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3bf03c53 to 127.0.0.1:54157 2023-07-15 13:15:54,168 DEBUG [RS:3;jenkins-hbase4:44807] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,167 INFO [RS:0;jenkins-hbase4:37679] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:15:54,169 INFO [RS:3;jenkins-hbase4:44807] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:54,169 INFO [RS:3;jenkins-hbase4:44807] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:54,169 INFO [RS:3;jenkins-hbase4:44807] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:15:54,169 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 13:15:54,167 DEBUG [RS:2;jenkins-hbase4:38761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,169 INFO [RS:0;jenkins-hbase4:37679] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:54,169 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:54,169 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 13:15:54,169 DEBUG [RS:0;jenkins-hbase4:37679] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d03a0a2 to 127.0.0.1:54157 2023-07-15 13:15:54,169 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1478): Online Regions={d8243996734f4532a436ec0fa187e59a=testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a.} 2023-07-15 13:15:54,169 DEBUG [RS:0;jenkins-hbase4:37679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,169 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37679,1689426926099; all regions closed. 
2023-07-15 13:15:54,170 DEBUG [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1504): Waiting on d8243996734f4532a436ec0fa187e59a 2023-07-15 13:15:54,183 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-15 13:15:54,183 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1478): Online Regions={e42fabbf20609a275edbe64c71867bfc=hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc., 0ee65cdb74bf12e0dc6b097a112f439b=hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b., 1588230740=hbase:meta,,1.1588230740, 7cbf6b928f4a5822d479aa1ed8d58489=unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489.} 2023-07-15 13:15:54,183 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1504): Waiting on 0ee65cdb74bf12e0dc6b097a112f439b, 1588230740, 7cbf6b928f4a5822d479aa1ed8d58489, e42fabbf20609a275edbe64c71867bfc 2023-07-15 13:15:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:54,184 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:54,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:54,185 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB heapSize=61.09 KB 2023-07-15 13:15:54,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/testRename/d8243996734f4532a436ec0fa187e59a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 13:15:54,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 2023-07-15 13:15:54,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d8243996734f4532a436ec0fa187e59a: 2023-07-15 13:15:54,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689426947303.d8243996734f4532a436ec0fa187e59a. 
2023-07-15 13:15:54,216 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:15:54,216 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:15:54,219 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,34837,1689426926314/jenkins-hbase4.apache.org%2C34837%2C1689426926314.meta.1689426929078.meta not finished, retry = 0 2023-07-15 13:15:54,230 DEBUG [RS:0;jenkins-hbase4:37679] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,230 INFO [RS:0;jenkins-hbase4:37679] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37679%2C1689426926099:(num 1689426928709) 2023-07-15 13:15:54,230 DEBUG [RS:0;jenkins-hbase4:37679] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,230 INFO [RS:0;jenkins-hbase4:37679] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/2b79ff93d78e4e0a90fae1337bf1362b 2023-07-15 13:15:54,251 INFO [RS:0;jenkins-hbase4:37679] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:15:54,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2b79ff93d78e4e0a90fae1337bf1362b 2023-07-15 13:15:54,251 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:15:54,251 INFO [RS:0;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:54,252 INFO [RS:0;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:54,252 INFO [RS:0;jenkins-hbase4:37679] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
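Editorial note: the metrics adapters being removed above belong to the RSGroupAdminEndpoint master coprocessor, which is only present because the test configuration wires in the rsgroup feature before the minicluster starts. Below is a hedged sketch of that wiring using the standard HBase 2.x property names; the exact setup in TestRSGroupsBase may differ.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

    public class RSGroupConfigSketch {

      static Configuration rsGroupEnabledConf() {
        Configuration conf = HBaseConfiguration.create();

        // Load the rsgroup admin endpoint on the master; this is the coprocessor the
        // shutdown log reports stopping and unregistering from metrics.
        conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");

        // Use the group-aware balancer so region placement honors rsgroup membership.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");

        return conf;
      }
    }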
2023-07-15 13:15:54,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/.tmp/m/2b79ff93d78e4e0a90fae1337bf1362b as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/2b79ff93d78e4e0a90fae1337bf1362b 2023-07-15 13:15:54,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2b79ff93d78e4e0a90fae1337bf1362b 2023-07-15 13:15:54,260 INFO [RS:0;jenkins-hbase4:37679] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37679 2023-07-15 13:15:54,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/m/2b79ff93d78e4e0a90fae1337bf1362b, entries=22, sequenceid=107, filesize=5.9 K 2023-07-15 13:15:54,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22601, heapSize ~36.52 KB/37400, currentSize=0 B/0 for e42fabbf20609a275edbe64c71867bfc in 100ms, sequenceid=107, compaction requested=true 2023-07-15 13:15:54,270 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:54,270 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:54,270 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37679,1689426926099 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, 
quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,271 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,272 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37679,1689426926099] 2023-07-15 13:15:54,272 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37679,1689426926099; numProcessing=1 2023-07-15 13:15:54,273 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37679,1689426926099 already deleted, retry=false 2023-07-15 13:15:54,273 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37679,1689426926099 expired; onlineServers=3 2023-07-15 13:15:54,279 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=220 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/82b0529b718d4daf9d7ca98820cd9b37 2023-07-15 13:15:54,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/rsgroup/e42fabbf20609a275edbe64c71867bfc/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-15 13:15:54,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:54,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e42fabbf20609a275edbe64c71867bfc: 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689426929411.e42fabbf20609a275edbe64c71867bfc. 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ee65cdb74bf12e0dc6b097a112f439b, disabling compactions & flushes 2023-07-15 13:15:54,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. after waiting 0 ms 2023-07-15 13:15:54,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 
2023-07-15 13:15:54,292 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82b0529b718d4daf9d7ca98820cd9b37 2023-07-15 13:15:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/namespace/0ee65cdb74bf12e0dc6b097a112f439b/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-15 13:15:54,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:54,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ee65cdb74bf12e0dc6b097a112f439b: 2023-07-15 13:15:54,305 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689426929310.0ee65cdb74bf12e0dc6b097a112f439b. 2023-07-15 13:15:54,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7cbf6b928f4a5822d479aa1ed8d58489, disabling compactions & flushes 2023-07-15 13:15:54,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:54,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:54,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. after waiting 0 ms 2023-07-15 13:15:54,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:54,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/default/unmovedTable/7cbf6b928f4a5822d479aa1ed8d58489/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 13:15:54,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 2023-07-15 13:15:54,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7cbf6b928f4a5822d479aa1ed8d58489: 2023-07-15 13:15:54,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689426948963.7cbf6b928f4a5822d479aa1ed8d58489. 
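Editorial note: the DefaultStoreFlusher / HStore "Added ... entries=..., sequenceid=..." entries for hbase:rsgroup above (and for hbase:meta just below) show memstores being flushed to HFiles as each region closes. The same flush path can also be driven explicitly through the public Admin API; the sketch below is illustrative only, assumes an open client Connection, and is not code from the test itself — it simply names the related public operation.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ExplicitFlushSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Force the memstores of every region of the table to be written out as HFiles,
          // the same operation the region close path performs implicitly during shutdown.
          admin.flush(TableName.valueOf("unmovedTable"));
        }
      }
    }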
2023-07-15 13:15:54,316 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=220 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/rep_barrier/7c0741e63ccd46568106b5ab5cad6a6b 2023-07-15 13:15:54,322 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c0741e63ccd46568106b5ab5cad6a6b 2023-07-15 13:15:54,324 DEBUG [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,324 INFO [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34837%2C1689426926314.meta:.meta(num 1689426929078) 2023-07-15 13:15:54,340 DEBUG [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,340 INFO [RS:1;jenkins-hbase4:34837] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34837%2C1689426926314:(num 1689426928700) 2023-07-15 13:15:54,341 DEBUG [RS:1;jenkins-hbase4:34837] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,341 INFO [RS:1;jenkins-hbase4:34837] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,341 INFO [RS:1;jenkins-hbase4:34837] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:15:54,342 INFO [RS:1;jenkins-hbase4:34837] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:54,342 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:15:54,342 INFO [RS:1;jenkins-hbase4:34837] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:54,343 INFO [RS:1;jenkins-hbase4:34837] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:15:54,344 INFO [RS:1;jenkins-hbase4:34837] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34837 2023-07-15 13:15:54,364 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=220 (bloomFilter=false), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/999b354e5ebc4c7c8ebe9ce1e3b8c6c9 2023-07-15 13:15:54,370 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38761,1689426926503; all regions closed. 
2023-07-15 13:15:54,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 999b354e5ebc4c7c8ebe9ce1e3b8c6c9 2023-07-15 13:15:54,371 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/info/82b0529b718d4daf9d7ca98820cd9b37 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/82b0529b718d4daf9d7ca98820cd9b37 2023-07-15 13:15:54,379 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,379 INFO [RS:0;jenkins-hbase4:37679] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37679,1689426926099; zookeeper connection closed. 2023-07-15 13:15:54,379 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:37679-0x101691f914d0001, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,380 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82b0529b718d4daf9d7ca98820cd9b37 2023-07-15 13:15:54,380 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:54,380 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:54,380 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,380 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34837,1689426926314 2023-07-15 13:15:54,380 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/info/82b0529b718d4daf9d7ca98820cd9b37, entries=62, sequenceid=220, filesize=11.8 K 2023-07-15 13:15:54,381 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34837,1689426926314] 2023-07-15 13:15:54,381 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34837,1689426926314; numProcessing=2 2023-07-15 13:15:54,382 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/rep_barrier/7c0741e63ccd46568106b5ab5cad6a6b as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier/7c0741e63ccd46568106b5ab5cad6a6b 2023-07-15 13:15:54,382 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34837,1689426926314 already deleted, retry=false 2023-07-15 13:15:54,383 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34837,1689426926314 expired; onlineServers=2 2023-07-15 13:15:54,383 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19dc0ad5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19dc0ad5 2023-07-15 13:15:54,383 DEBUG [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-15 13:15:54,387 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/WALs/jenkins-hbase4.apache.org,38761,1689426926503/jenkins-hbase4.apache.org%2C38761%2C1689426926503.meta.1689426931237.meta not finished, retry = 0 2023-07-15 13:15:54,396 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c0741e63ccd46568106b5ab5cad6a6b 2023-07-15 13:15:54,396 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/rep_barrier/7c0741e63ccd46568106b5ab5cad6a6b, entries=8, sequenceid=220, filesize=5.8 K 2023-07-15 13:15:54,397 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/.tmp/table/999b354e5ebc4c7c8ebe9ce1e3b8c6c9 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/999b354e5ebc4c7c8ebe9ce1e3b8c6c9 2023-07-15 13:15:54,404 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 999b354e5ebc4c7c8ebe9ce1e3b8c6c9 2023-07-15 13:15:54,404 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/table/999b354e5ebc4c7c8ebe9ce1e3b8c6c9, entries=16, sequenceid=220, filesize=6.0 K 2023-07-15 13:15:54,410 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 226ms, sequenceid=220, compaction requested=true 2023-07-15 13:15:54,428 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/data/hbase/meta/1588230740/recovered.edits/223.seqid, newMaxSeqId=223, maxSeqId=108 2023-07-15 13:15:54,429 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:54,429 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:54,429 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:54,430 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:54,430 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-15 13:15:54,430 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-15 13:15:54,490 DEBUG [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,490 INFO [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38761%2C1689426926503.meta:.meta(num 1689426931237) 2023-07-15 13:15:54,497 DEBUG [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38761%2C1689426926503:(num 1689426928728) 2023-07-15 13:15:54,497 DEBUG [RS:2;jenkins-hbase4:38761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:54,497 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:54,497 INFO [RS:2;jenkins-hbase4:38761] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-15 13:15:54,498 INFO [RS:2;jenkins-hbase4:38761] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38761 2023-07-15 13:15:54,502 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:54,502 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,502 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38761,1689426926503 2023-07-15 13:15:54,503 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38761,1689426926503] 2023-07-15 13:15:54,503 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38761,1689426926503; numProcessing=3 2023-07-15 13:15:54,504 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38761,1689426926503 already deleted, retry=false 2023-07-15 13:15:54,504 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38761,1689426926503 expired; onlineServers=1 2023-07-15 13:15:54,584 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44807,1689426930103; all regions closed. 2023-07-15 13:15:54,589 DEBUG [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,589 INFO [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44807%2C1689426930103.meta:.meta(num 1689426938437) 2023-07-15 13:15:54,596 DEBUG [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/oldWALs 2023-07-15 13:15:54,596 INFO [RS:3;jenkins-hbase4:44807] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44807%2C1689426930103:(num 1689426930484) 2023-07-15 13:15:54,596 DEBUG [RS:3;jenkins-hbase4:44807] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,597 INFO [RS:3;jenkins-hbase4:44807] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:54,597 INFO [RS:3;jenkins-hbase4:44807] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:15:54,597 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 13:15:54,598 INFO [RS:3;jenkins-hbase4:44807] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44807 2023-07-15 13:15:54,600 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:54,600 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44807,1689426930103 2023-07-15 13:15:54,600 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44807,1689426930103] 2023-07-15 13:15:54,600 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44807,1689426930103; numProcessing=4 2023-07-15 13:15:54,602 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44807,1689426930103 already deleted, retry=false 2023-07-15 13:15:54,602 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44807,1689426930103 expired; onlineServers=0 2023-07-15 13:15:54,602 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40693,1689426924021' ***** 2023-07-15 13:15:54,602 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 13:15:54,603 DEBUG [M:0;jenkins-hbase4:40693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@711f5131, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:54,603 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:54,605 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:54,605 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:54,606 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:54,606 INFO [M:0;jenkins-hbase4:40693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@664114d7{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:15:54,606 INFO [M:0;jenkins-hbase4:40693] server.AbstractConnector(383): Stopped ServerConnector@40debc20{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,606 INFO [M:0;jenkins-hbase4:40693] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:54,607 INFO [M:0;jenkins-hbase4:40693] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@577a3a17{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:54,607 INFO [M:0;jenkins-hbase4:40693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@128132e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:54,608 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40693,1689426924021 2023-07-15 13:15:54,608 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40693,1689426924021; all regions closed. 2023-07-15 13:15:54,608 DEBUG [M:0;jenkins-hbase4:40693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:54,608 INFO [M:0;jenkins-hbase4:40693] master.HMaster(1491): Stopping master jetty server 2023-07-15 13:15:54,608 INFO [M:0;jenkins-hbase4:40693] server.AbstractConnector(383): Stopped ServerConnector@1d76baff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:54,609 DEBUG [M:0;jenkins-hbase4:40693] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 13:15:54,609 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 13:15:54,609 DEBUG [M:0;jenkins-hbase4:40693] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 13:15:54,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426928233] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426928233,5,FailOnTimeoutGroup] 2023-07-15 13:15:54,609 INFO [M:0;jenkins-hbase4:40693] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 13:15:54,609 INFO [M:0;jenkins-hbase4:40693] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-15 13:15:54,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426928235] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426928235,5,FailOnTimeoutGroup] 2023-07-15 13:15:54,609 INFO [M:0;jenkins-hbase4:40693] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-15 13:15:54,609 DEBUG [M:0;jenkins-hbase4:40693] master.HMaster(1512): Stopping service threads 2023-07-15 13:15:54,609 INFO [M:0;jenkins-hbase4:40693] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 13:15:54,610 ERROR [M:0;jenkins-hbase4:40693] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-15 13:15:54,610 INFO [M:0;jenkins-hbase4:40693] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 13:15:54,610 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-15 13:15:54,611 DEBUG [M:0;jenkins-hbase4:40693] zookeeper.ZKUtil(398): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 13:15:54,611 WARN [M:0;jenkins-hbase4:40693] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 13:15:54,611 INFO [M:0;jenkins-hbase4:40693] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 13:15:54,611 INFO [M:0;jenkins-hbase4:40693] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 13:15:54,611 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:15:54,611 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:54,611 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:54,611 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:15:54,611 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 13:15:54,612 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=538.36 KB heapSize=644.68 KB 2023-07-15 13:15:54,617 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,617 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:38761-0x101691f914d0003, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,617 INFO [RS:2;jenkins-hbase4:38761] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38761,1689426926503; zookeeper connection closed. 2023-07-15 13:15:54,617 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c119295] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c119295 2023-07-15 13:15:54,624 INFO [M:0;jenkins-hbase4:40693] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=538.36 KB at sequenceid=1200 (bloomFilter=true), to=hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ce5f37f3924848a68831881655998776 2023-07-15 13:15:54,630 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ce5f37f3924848a68831881655998776 as hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ce5f37f3924848a68831881655998776 2023-07-15 13:15:54,635 INFO [M:0;jenkins-hbase4:40693] regionserver.HStore(1080): Added hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ce5f37f3924848a68831881655998776, entries=160, sequenceid=1200, filesize=28.1 K 2023-07-15 13:15:54,636 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegion(2948): Finished flush of dataSize ~538.36 KB/551284, heapSize ~644.66 KB/660136, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=1200, compaction requested=false 2023-07-15 13:15:54,637 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:54,637 DEBUG [M:0;jenkins-hbase4:40693] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:15:54,641 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:15:54,641 INFO [M:0;jenkins-hbase4:40693] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-15 13:15:54,641 INFO [M:0;jenkins-hbase4:40693] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40693 2023-07-15 13:15:54,643 DEBUG [M:0;jenkins-hbase4:40693] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40693,1689426924021 already deleted, retry=false 2023-07-15 13:15:54,717 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,717 INFO [RS:1;jenkins-hbase4:34837] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34837,1689426926314; zookeeper connection closed. 2023-07-15 13:15:54,717 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:34837-0x101691f914d0002, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,718 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d847841] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d847841 2023-07-15 13:15:54,918 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:54,918 INFO [M:0;jenkins-hbase4:40693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40693,1689426924021; zookeeper connection closed. 2023-07-15 13:15:54,918 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): master:40693-0x101691f914d0000, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:55,018 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:55,018 INFO [RS:3;jenkins-hbase4:44807] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44807,1689426930103; zookeeper connection closed. 
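Editorial note: once the master's RPC server stops and the remaining ZooKeeper sessions close, the utility reports "Shutdown of 1 master(s) and 4 regionserver(s) complete" and "Minicluster is down" in the entries that follow, then immediately brings a fresh minicluster up with the same StartMiniClusterOption (numMasters=1, numRegionServers=3, numDataNodes=3) for the next test. A hedged sketch of that stop/start cycle against HBaseTestingUtility is below; the option values are copied from the log, everything else is illustrative.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterCycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // matches numMasters=1 in the logged option
            .numRegionServers(3)  // matches numRegionServers=3
            .numDataNodes(3)      // matches numDataNodes=3
            .build();

        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region servers
        try {
          // ... run tests against util.getConnection() ...
        } finally {
          util.shutdownMiniCluster();    // produces the staged shutdown traced in this log
        }
      }
    }

Note that the log reports four region servers shutting down even though the option requests three: the test adds an extra region server (port 44807) during its run, and the shutdown hook stops whatever is online.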
2023-07-15 13:15:55,018 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): regionserver:44807-0x101691f914d000b, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:15:55,019 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4e22c52a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4e22c52a 2023-07-15 13:15:55,019 INFO [Listener at localhost/38739] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-15 13:15:55,019 WARN [Listener at localhost/38739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:15:55,024 INFO [Listener at localhost/38739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:15:55,127 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:15:55,127 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-473868280-172.31.14.131-1689426920401 (Datanode Uuid 8f649353-adf6-4753-909f-1c23368d8c9e) service to localhost/127.0.0.1:42517 2023-07-15 13:15:55,128 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data5/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,129 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data6/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,131 WARN [Listener at localhost/38739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:15:55,133 INFO [Listener at localhost/38739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:15:55,237 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:15:55,237 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-473868280-172.31.14.131-1689426920401 (Datanode Uuid 5b565817-c1a0-4dde-bff3-3b0b4d751122) service to localhost/127.0.0.1:42517 2023-07-15 13:15:55,238 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data3/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,239 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data4/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,240 WARN [Listener at localhost/38739] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:15:55,242 INFO [Listener at localhost/38739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:15:55,345 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:15:55,345 WARN [BP-473868280-172.31.14.131-1689426920401 heartbeating to localhost/127.0.0.1:42517] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-473868280-172.31.14.131-1689426920401 (Datanode Uuid 1b1a8ca7-71ca-4e9a-aa52-f850b969373a) service to localhost/127.0.0.1:42517 2023-07-15 13:15:55,346 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data1/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,346 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/cluster_545a304e-a9e7-1a1d-1dcd-66099efd16d1/dfs/data/data2/current/BP-473868280-172.31.14.131-1689426920401] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:15:55,375 INFO [Listener at localhost/38739] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:15:55,505 INFO [Listener at localhost/38739] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-15 13:15:55,556 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.log.dir so I do NOT create it in target/test-data/5d9c824c-4370-2116-8258-a72b72306248 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1615bc6b-f00c-a933-aaea-f94ecc6584ad/hadoop.tmp.dir so I do NOT create it in target/test-data/5d9c824c-4370-2116-8258-a72b72306248 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda, deleteOnExit=true 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/test.cache.data in system properties and HBase conf 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir in system properties and HBase conf 2023-07-15 13:15:55,557 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 13:15:55,558 DEBUG [Listener at localhost/38739] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:15:55,558 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/nfs.dump.dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 13:15:55,559 INFO [Listener at localhost/38739] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 13:15:55,563 WARN [Listener at localhost/38739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:15:55,564 WARN [Listener at localhost/38739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:15:55,601 DEBUG [Listener at localhost/38739-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101691f914d000a, quorum=127.0.0.1:54157, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-15 13:15:55,601 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101691f914d000a, quorum=127.0.0.1:54157, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-15 13:15:55,607 WARN [Listener at localhost/38739] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:55,608 INFO [Listener at localhost/38739] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:55,612 INFO [Listener at localhost/38739] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/Jetty_localhost_43075_hdfs____kt6qxl/webapp 2023-07-15 13:15:55,705 INFO [Listener at localhost/38739] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43075 2023-07-15 13:15:55,711 WARN [Listener at localhost/38739] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:15:55,711 WARN [Listener at localhost/38739] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:15:55,759 WARN [Listener at localhost/34061] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:55,774 WARN [Listener at localhost/34061] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:55,778 WARN [Listener 
at localhost/34061] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:55,780 INFO [Listener at localhost/34061] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:55,785 INFO [Listener at localhost/34061] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/Jetty_localhost_41161_datanode____.pe1ea2/webapp 2023-07-15 13:15:55,919 INFO [Listener at localhost/34061] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41161 2023-07-15 13:15:55,923 WARN [Listener at localhost/36763] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:55,942 WARN [Listener at localhost/36763] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-15 13:15:55,993 WARN [Listener at localhost/36763] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:55,996 WARN [Listener at localhost/36763] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:55,997 INFO [Listener at localhost/36763] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:56,004 INFO [Listener at localhost/36763] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/Jetty_localhost_35309_datanode____.kbzd4v/webapp 2023-07-15 13:15:56,058 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x99d7033e4ee0386a: Processing first storage report for DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11 from datanode 61846601-6be8-45f0-9aa7-87a6dc8df716 2023-07-15 13:15:56,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x99d7033e4ee0386a: from storage DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11 node DatanodeRegistration(127.0.0.1:42579, datanodeUuid=61846601-6be8-45f0-9aa7-87a6dc8df716, infoPort=43699, infoSecurePort=0, ipcPort=36763, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,059 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x99d7033e4ee0386a: Processing first storage report for DS-b57f1b24-eb62-4ea1-acf8-f161293bcc84 from datanode 61846601-6be8-45f0-9aa7-87a6dc8df716 2023-07-15 13:15:56,059 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x99d7033e4ee0386a: from storage DS-b57f1b24-eb62-4ea1-acf8-f161293bcc84 node DatanodeRegistration(127.0.0.1:42579, datanodeUuid=61846601-6be8-45f0-9aa7-87a6dc8df716, infoPort=43699, infoSecurePort=0, ipcPort=36763, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,110 INFO [Listener at localhost/36763] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35309 2023-07-15 13:15:56,120 WARN [Listener at localhost/38967] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:56,145 WARN [Listener at localhost/38967] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:15:56,150 WARN [Listener at localhost/38967] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:15:56,152 INFO [Listener at localhost/38967] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:15:56,155 INFO [Listener at localhost/38967] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/Jetty_localhost_41157_datanode____ovynz1/webapp 2023-07-15 13:15:56,242 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4fd91e8c8d2dd06c: Processing first storage report for DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008 from datanode 8241a952-6f63-402e-af90-f07638c938ea 2023-07-15 13:15:56,242 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4fd91e8c8d2dd06c: from storage DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008 node DatanodeRegistration(127.0.0.1:41675, datanodeUuid=8241a952-6f63-402e-af90-f07638c938ea, infoPort=45399, infoSecurePort=0, ipcPort=38967, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,243 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4fd91e8c8d2dd06c: Processing first storage report for DS-5c5640d7-f775-4285-9ef0-804bbdfd493d from datanode 8241a952-6f63-402e-af90-f07638c938ea 2023-07-15 13:15:56,243 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4fd91e8c8d2dd06c: from storage DS-5c5640d7-f775-4285-9ef0-804bbdfd493d node DatanodeRegistration(127.0.0.1:41675, datanodeUuid=8241a952-6f63-402e-af90-f07638c938ea, infoPort=45399, infoSecurePort=0, ipcPort=38967, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,261 INFO [Listener at localhost/38967] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41157 2023-07-15 13:15:56,270 WARN [Listener at localhost/41271] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:15:56,369 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f25360d1e2fe4: Processing first storage report for DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316 from datanode b7331fe9-5fad-4222-8186-e073ac1a9d01 2023-07-15 13:15:56,369 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f25360d1e2fe4: from storage DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316 node DatanodeRegistration(127.0.0.1:33371, datanodeUuid=b7331fe9-5fad-4222-8186-e073ac1a9d01, infoPort=34515, infoSecurePort=0, ipcPort=41271, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: true, processing time: 
0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,369 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f25360d1e2fe4: Processing first storage report for DS-4220a73c-983e-4a7d-b659-b73087b0d333 from datanode b7331fe9-5fad-4222-8186-e073ac1a9d01 2023-07-15 13:15:56,369 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f25360d1e2fe4: from storage DS-4220a73c-983e-4a7d-b659-b73087b0d333 node DatanodeRegistration(127.0.0.1:33371, datanodeUuid=b7331fe9-5fad-4222-8186-e073ac1a9d01, infoPort=34515, infoSecurePort=0, ipcPort=41271, storageInfo=lv=-57;cid=testClusterID;nsid=1611345649;c=1689426955566), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:15:56,384 DEBUG [Listener at localhost/41271] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248 2023-07-15 13:15:56,387 INFO [Listener at localhost/41271] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/zookeeper_0, clientPort=62025, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 13:15:56,388 INFO [Listener at localhost/41271] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62025 2023-07-15 13:15:56,389 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,390 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,413 INFO [Listener at localhost/41271] util.FSUtils(471): Created version file at hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69 with version=8 2023-07-15 13:15:56,413 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/hbase-staging 2023-07-15 13:15:56,415 DEBUG [Listener at localhost/41271] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 13:15:56,415 DEBUG [Listener at localhost/41271] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 13:15:56,415 DEBUG [Listener at localhost/41271] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 13:15:56,415 DEBUG [Listener at localhost/41271] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
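[Editor's note] From "Starting up minicluster with option: StartMiniClusterOption{numMasters=1, ..., numRegionServers=3, ..., numDataNodes=3, ..., numZkServers=1, ...}" through the DFS, DataNode, MiniZooKeeperCluster, and port-randomization entries above, the utility is bringing up a second cluster. A sketch of driving that same startup from test code, using the option values printed in the log; the class name is illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the StartMiniClusterOption printed in the log above.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option); // starts mini DFS, mini ZK, master, region servers
    try {
      // ... run test logic against util.getConnection() here ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}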
2023-07-15 13:15:56,416 INFO [Listener at localhost/41271] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:56,416 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,417 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,417 INFO [Listener at localhost/41271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:56,417 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,417 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:56,417 INFO [Listener at localhost/41271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:56,419 INFO [Listener at localhost/41271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44709 2023-07-15 13:15:56,420 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,421 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,422 INFO [Listener at localhost/41271] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44709 connecting to ZooKeeper ensemble=127.0.0.1:62025 2023-07-15 13:15:56,434 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:447090x0, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:56,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44709-0x101692013650000 connected 2023-07-15 13:15:56,460 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:56,461 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:56,461 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:56,466 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44709 2023-07-15 13:15:56,467 DEBUG [Listener at localhost/41271] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44709 2023-07-15 13:15:56,467 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44709 2023-07-15 13:15:56,470 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44709 2023-07-15 13:15:56,470 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44709 2023-07-15 13:15:56,472 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:56,473 INFO [Listener at localhost/41271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
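[Editor's note] The RpcExecutor entries above show the master's call queues being built: a default.FPBQ.Fifo pool, a priority.RWQ.Fifo pool split into read and write handlers, plus replication and metaPriority pools. The shape of these executors normally comes from configuration. A hedged sketch of the usual knobs; the concrete values this test run used are not visible in the log, so the numbers below are placeholders, not the test's actual settings:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative only: configuration properties that typically shape the
// RPC call queues logged above (handler counts, read/write queue split).
static Configuration rpcQueueConf() {
  Configuration conf = HBaseConfiguration.create();
  conf.setInt("hbase.regionserver.handler.count", 3);           // default.FPBQ.Fifo handlers
  conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f); // split priority RWQ into read/write
  conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f); // no dedicated scan queues
  return conf;
}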
2023-07-15 13:15:56,474 INFO [Listener at localhost/41271] http.HttpServer(1146): Jetty bound to port 35639 2023-07-15 13:15:56,474 INFO [Listener at localhost/41271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:56,476 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,477 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78412264{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:56,477 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,477 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78e67007{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:56,592 INFO [Listener at localhost/41271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:56,593 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:56,593 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:56,594 INFO [Listener at localhost/41271] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:15:56,595 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,596 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@41313141{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/jetty-0_0_0_0-35639-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7642164493112026923/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:15:56,597 INFO [Listener at localhost/41271] server.AbstractConnector(333): Started ServerConnector@1e331607{HTTP/1.1, (http/1.1)}{0.0.0.0:35639} 2023-07-15 13:15:56,598 INFO [Listener at localhost/41271] server.Server(415): Started @38424ms 2023-07-15 13:15:56,598 INFO [Listener at localhost/41271] master.HMaster(444): hbase.rootdir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69, hbase.cluster.distributed=false 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,614 
INFO [Listener at localhost/41271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:56,614 INFO [Listener at localhost/41271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:56,615 INFO [Listener at localhost/41271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46535 2023-07-15 13:15:56,615 INFO [Listener at localhost/41271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:56,617 DEBUG [Listener at localhost/41271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:56,617 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,619 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,620 INFO [Listener at localhost/41271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46535 connecting to ZooKeeper ensemble=127.0.0.1:62025 2023-07-15 13:15:56,624 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:465350x0, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:56,625 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46535-0x101692013650001 connected 2023-07-15 13:15:56,625 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:56,626 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:56,626 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:56,630 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46535 2023-07-15 13:15:56,631 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46535 2023-07-15 13:15:56,631 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46535 2023-07-15 13:15:56,631 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46535 2023-07-15 13:15:56,631 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46535 2023-07-15 13:15:56,633 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:56,633 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:56,634 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:56,634 INFO [Listener at localhost/41271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:56,634 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:56,634 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:56,634 INFO [Listener at localhost/41271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:15:56,635 INFO [Listener at localhost/41271] http.HttpServer(1146): Jetty bound to port 34909 2023-07-15 13:15:56,635 INFO [Listener at localhost/41271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:56,639 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,639 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@150bbc40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:56,639 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,640 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@58399b0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:56,758 INFO [Listener at localhost/41271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:56,759 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:56,759 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:56,760 INFO [Listener at localhost/41271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:15:56,761 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,762 INFO 
[Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@43a76ec8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/jetty-0_0_0_0-34909-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4735311206230578952/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:56,763 INFO [Listener at localhost/41271] server.AbstractConnector(333): Started ServerConnector@71cf3187{HTTP/1.1, (http/1.1)}{0.0.0.0:34909} 2023-07-15 13:15:56,764 INFO [Listener at localhost/41271] server.Server(415): Started @38590ms 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:56,777 INFO [Listener at localhost/41271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:56,778 INFO [Listener at localhost/41271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40619 2023-07-15 13:15:56,778 INFO [Listener at localhost/41271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:56,780 DEBUG [Listener at localhost/41271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:56,780 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,781 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,782 INFO [Listener at localhost/41271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40619 connecting to ZooKeeper ensemble=127.0.0.1:62025 2023-07-15 13:15:56,786 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:406190x0, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 
13:15:56,787 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40619-0x101692013650002 connected 2023-07-15 13:15:56,787 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:56,788 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:56,788 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:56,789 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40619 2023-07-15 13:15:56,790 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40619 2023-07-15 13:15:56,791 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40619 2023-07-15 13:15:56,791 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40619 2023-07-15 13:15:56,791 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40619 2023-07-15 13:15:56,793 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:56,793 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:56,793 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:56,793 INFO [Listener at localhost/41271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:56,794 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:56,794 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:56,794 INFO [Listener at localhost/41271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
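[Editor's note] The repeated ZKUtil(164) entries, "Set watcher on znode that does not yet exist, /hbase/master", record each server registering a watch before the active master creates that znode. With the plain ZooKeeper client the same effect comes from an exists() call with a watch set. A minimal sketch against the test quorum printed in the log (127.0.0.1:62025); the class name and the watcher body are illustrative:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterWatchSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62025", 30000, watcher);
    // exists() registers the watch; it returns null while /hbase/master is
    // absent, and the watcher fires once the active master creates the znode.
    zk.exists("/hbase/master", true);
  }
}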
2023-07-15 13:15:56,794 INFO [Listener at localhost/41271] http.HttpServer(1146): Jetty bound to port 42839 2023-07-15 13:15:56,794 INFO [Listener at localhost/41271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:56,799 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,799 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@622bd715{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:56,800 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,800 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f107ff7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:56,913 INFO [Listener at localhost/41271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:56,914 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:56,915 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:56,915 INFO [Listener at localhost/41271] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:15:56,916 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,917 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@529e56fb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/jetty-0_0_0_0-42839-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3185667518907893287/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:56,918 INFO [Listener at localhost/41271] server.AbstractConnector(333): Started ServerConnector@78bb3eeb{HTTP/1.1, (http/1.1)}{0.0.0.0:42839} 2023-07-15 13:15:56,918 INFO [Listener at localhost/41271] server.Server(415): Started @38745ms 2023-07-15 13:15:56,930 INFO [Listener at localhost/41271] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:15:56,930 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,930 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,930 INFO [Listener at localhost/41271] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:15:56,930 INFO 
[Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:15:56,930 INFO [Listener at localhost/41271] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:15:56,931 INFO [Listener at localhost/41271] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:15:56,931 INFO [Listener at localhost/41271] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35105 2023-07-15 13:15:56,932 INFO [Listener at localhost/41271] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:15:56,936 DEBUG [Listener at localhost/41271] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:15:56,936 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,937 INFO [Listener at localhost/41271] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:56,938 INFO [Listener at localhost/41271] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35105 connecting to ZooKeeper ensemble=127.0.0.1:62025 2023-07-15 13:15:56,943 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:351050x0, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:56,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35105-0x101692013650003 connected 2023-07-15 13:15:56,945 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:15:56,946 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:56,947 DEBUG [Listener at localhost/41271] zookeeper.ZKUtil(164): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:15:56,952 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35105 2023-07-15 13:15:56,952 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35105 2023-07-15 13:15:56,958 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35105 2023-07-15 13:15:56,963 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35105 2023-07-15 13:15:56,963 DEBUG [Listener at localhost/41271] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35105 2023-07-15 13:15:56,965 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:15:56,966 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:15:56,966 INFO [Listener at localhost/41271] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:15:56,966 INFO [Listener at localhost/41271] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:15:56,967 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:15:56,967 INFO [Listener at localhost/41271] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:15:56,967 INFO [Listener at localhost/41271] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:15:56,968 INFO [Listener at localhost/41271] http.HttpServer(1146): Jetty bound to port 36459 2023-07-15 13:15:56,968 INFO [Listener at localhost/41271] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:56,971 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,972 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31f6ff8a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:15:56,972 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:56,972 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7596ce29{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:15:57,109 INFO [Listener at localhost/41271] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:15:57,110 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:15:57,110 INFO [Listener at localhost/41271] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:15:57,110 INFO [Listener at localhost/41271] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:15:57,111 INFO [Listener at localhost/41271] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:15:57,112 INFO [Listener at localhost/41271] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@11e3dcbb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/java.io.tmpdir/jetty-0_0_0_0-36459-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5548909995247992567/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:57,114 INFO [Listener at localhost/41271] server.AbstractConnector(333): Started ServerConnector@25f74100{HTTP/1.1, (http/1.1)}{0.0.0.0:36459} 2023-07-15 13:15:57,114 INFO [Listener at localhost/41271] server.Server(415): Started @38941ms 2023-07-15 13:15:57,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:15:57,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@736543ff{HTTP/1.1, (http/1.1)}{0.0.0.0:38101} 2023-07-15 13:15:57,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38950ms 2023-07-15 13:15:57,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,125 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:15:57,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,127 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:57,127 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:57,127 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:57,127 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:15:57,128 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:15:57,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44709,1689426956415 from backup master directory 2023-07-15 13:15:57,131 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:15:57,132 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,132 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:15:57,132 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:57,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/hbase.id with ID: 860c36f5-4af7-45db-9a9c-1e078cfdf87c 2023-07-15 13:15:57,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:15:57,171 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3ac278a8 to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:57,193 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8237664, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:57,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:57,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 13:15:57,194 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:57,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store-tmp 2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:15:57,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:15:57,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
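The entries above show the master materializing its local 'master:store' region from a descriptor with a single 'proc' family (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536'). A minimal sketch of how an equivalent descriptor can be assembled with the public HBase 2.x client API follows; the class and method names are illustrative only, since the master builds this region internally rather than through client code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ProcFamilyDescriptorSketch {
  // Builds a descriptor mirroring the 'proc' family settings quoted in the log entry
  // above; illustrative only -- the master creates 'master:store' itself at startup.
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .setInMemory(false)                // IN_MEMORY => 'false'
            .build())
        .build();
  }
}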
2023-07-15 13:15:57,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:15:57,211 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/WALs/jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44709%2C1689426956415, suffix=, logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/WALs/jenkins-hbase4.apache.org,44709,1689426956415, archiveDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/oldWALs, maxLogs=10 2023-07-15 13:15:57,230 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK] 2023-07-15 13:15:57,231 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK] 2023-07-15 13:15:57,231 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK] 2023-07-15 13:15:57,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/WALs/jenkins-hbase4.apache.org,44709,1689426956415/jenkins-hbase4.apache.org%2C44709%2C1689426956415.1689426957214 2023-07-15 13:15:57,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK], DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK], DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK]] 2023-07-15 13:15:57,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:57,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:57,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,242 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,244 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 13:15:57,244 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 13:15:57,244 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:15:57,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:57,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11959234400, jitterRate=0.11379049718379974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:57,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:15:57,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 13:15:57,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 13:15:57,254 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 13:15:57,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 13:15:57,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-15 13:15:57,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-15 13:15:57,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 13:15:57,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 13:15:57,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-15 13:15:57,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 13:15:57,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 13:15:57,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 13:15:57,269 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 13:15:57,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 13:15:57,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 13:15:57,272 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:57,272 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:57,272 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-15 13:15:57,272 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:57,272 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44709,1689426956415, sessionid=0x101692013650000, setting cluster-up flag (Was=false) 2023-07-15 13:15:57,277 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 13:15:57,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,288 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 13:15:57,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:57,293 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.hbase-snapshot/.tmp 2023-07-15 13:15:57,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 13:15:57,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 13:15:57,297 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 13:15:57,297 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:15:57,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
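At this point the master has registered the RSGroupAdminService coprocessor endpoint and refreshed the RSGroupInfoManager in offline mode. A minimal sketch, assuming the two configuration keys the HBase reference guide documents for region server groups, of how a test configuration enables that endpoint together with the matching balancer; the RsGroupConfSketch class itself is illustrative and not part of the test code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfSketch {
  // Returns a Configuration carrying the two documented rsgroup properties;
  // presumably the test utility sets the equivalent before starting the mini-cluster.
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}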
2023-07-15 13:15:57,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-15 13:15:57,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:57,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:15:57,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 13:15:57,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:15:57,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-15 13:15:57,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:57,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:57,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,316 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(951): ClusterId : 860c36f5-4af7-45db-9a9c-1e078cfdf87c 2023-07-15 13:15:57,316 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(951): ClusterId : 860c36f5-4af7-45db-9a9c-1e078cfdf87c 2023-07-15 13:15:57,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689426987321 2023-07-15 13:15:57,323 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:57,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 13:15:57,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 13:15:57,321 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:57,318 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(951): ClusterId : 860c36f5-4af7-45db-9a9c-1e078cfdf87c 2023-07-15 13:15:57,324 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:15:57,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 13:15:57,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 13:15:57,324 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 13:15:57,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 13:15:57,325 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:57,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,325 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 13:15:57,326 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:57,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 13:15:57,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 13:15:57,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 13:15:57,328 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:57,328 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:57,328 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:57,328 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:57,328 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:15:57,328 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:15:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 13:15:57,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 13:15:57,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426957328,5,FailOnTimeoutGroup] 2023-07-15 13:15:57,330 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:57,330 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:57,331 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:15:57,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426957329,5,FailOnTimeoutGroup] 2023-07-15 13:15:57,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-15 13:15:57,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,336 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ReadOnlyZKClient(139): Connect 0x7200cef4 to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:57,336 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ReadOnlyZKClient(139): Connect 0x32f9b515 to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:57,336 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ReadOnlyZKClient(139): Connect 0x702a8227 to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:57,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,352 DEBUG [RS:2;jenkins-hbase4:35105] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7eed7252, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:57,352 DEBUG [RS:1;jenkins-hbase4:40619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cc73ec4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:57,353 DEBUG [RS:0;jenkins-hbase4:46535] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3801de65, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:57,353 DEBUG [RS:2;jenkins-hbase4:35105] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fd07869, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:57,353 DEBUG [RS:0;jenkins-hbase4:46535] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d034701, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:57,353 DEBUG [RS:1;jenkins-hbase4:40619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3dfa34ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:15:57,359 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:57,360 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:57,360 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69 2023-07-15 13:15:57,363 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40619 2023-07-15 13:15:57,363 INFO [RS:1;jenkins-hbase4:40619] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:57,363 INFO [RS:1;jenkins-hbase4:40619] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:57,363 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:57,364 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46535 2023-07-15 13:15:57,364 INFO [RS:0;jenkins-hbase4:46535] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:57,364 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44709,1689426956415 with isa=jenkins-hbase4.apache.org/172.31.14.131:40619, startcode=1689426956776 2023-07-15 13:15:57,364 INFO [RS:0;jenkins-hbase4:46535] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:57,364 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:15:57,364 DEBUG [RS:1;jenkins-hbase4:40619] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:57,364 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44709,1689426956415 with isa=jenkins-hbase4.apache.org/172.31.14.131:46535, startcode=1689426956613 2023-07-15 13:15:57,365 DEBUG [RS:0;jenkins-hbase4:46535] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:57,366 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35105 2023-07-15 13:15:57,366 INFO [RS:2;jenkins-hbase4:35105] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:15:57,367 INFO [RS:2;jenkins-hbase4:35105] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:15:57,367 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-15 13:15:57,367 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44709,1689426956415 with isa=jenkins-hbase4.apache.org/172.31.14.131:35105, startcode=1689426956929 2023-07-15 13:15:57,367 DEBUG [RS:2;jenkins-hbase4:35105] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:15:57,371 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41575, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:57,371 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38595, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:57,371 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43779, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:15:57,373 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,373 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:15:57,374 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 13:15:57,374 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,374 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:15:57,374 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44709] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,374 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-15 13:15:57,375 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
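The entries around this point register the three region servers with the master while the RSGroupInfoManager listener thread rebuilds the default group's server list after each registration. A sketch, assuming the RSGroupAdminClient helper shipped with the hbase-rsgroup module on branch-2.4, of how client code could read that default group back once the cluster is up; connection handling and the printed output are illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupSketch {
  public static void main(String[] args) throws IOException {
    // Assumes the hbase-rsgroup module's RSGroupAdminClient; once the three region
    // servers above have registered, the 'default' group should list all of them.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + defaultGroup.getServers());
    }
  }
}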
2023-07-15 13:15:57,375 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 13:15:57,375 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69 2023-07-15 13:15:57,375 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69 2023-07-15 13:15:57,375 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34061 2023-07-15 13:15:57,375 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34061 2023-07-15 13:15:57,375 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35639 2023-07-15 13:15:57,375 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69 2023-07-15 13:15:57,375 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35639 2023-07-15 13:15:57,375 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34061 2023-07-15 13:15:57,375 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35639 2023-07-15 13:15:57,376 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:57,382 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ZKUtil(162): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,382 WARN [RS:2;jenkins-hbase4:35105] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:57,382 INFO [RS:2;jenkins-hbase4:35105] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:57,382 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,382 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ZKUtil(162): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,383 WARN [RS:1;jenkins-hbase4:40619] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
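The surrounding ZKUtil/ZKWatcher entries show each region server publishing an ephemeral znode under /hbase/rs and setting watchers on its peers' znodes. A sketch of the same watch-then-check pattern written against the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher; the ensemble address and znode path are taken from the log, everything else is illustrative.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Connects to the ensemble address shown in the log and registers a default watcher,
    // which fires for the NodeCreated/NodeDeleted/NodeChildrenChanged events that the
    // ZKWatcher lines above report as "Received ZooKeeper Event".
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62025", 90000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        System.out.println("event: " + event.getType() + " on " + event.getPath());
      }
    });
    // exists() with watch=true registers a one-shot watch on the znode -- the same
    // pattern behind the "Set watcher on existing znode" / "Set watcher on znode that
    // does not yet exist" messages above.
    zk.exists("/hbase/rs", true);
    Thread.sleep(5000); // give any pending events a chance to arrive before closing
    zk.close();
  }
}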
2023-07-15 13:15:57,383 INFO [RS:1;jenkins-hbase4:40619] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:57,383 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,383 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ZKUtil(162): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,383 WARN [RS:0;jenkins-hbase4:46535] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:15:57,383 INFO [RS:0;jenkins-hbase4:46535] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:57,383 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,387 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40619,1689426956776] 2023-07-15 13:15:57,387 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35105,1689426956929] 2023-07-15 13:15:57,387 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46535,1689426956613] 2023-07-15 13:15:57,405 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:57,405 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ZKUtil(162): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,406 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ZKUtil(162): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,407 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ZKUtil(162): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,407 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ZKUtil(162): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,407 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ZKUtil(162): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,407 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 
1588230740 2023-07-15 13:15:57,407 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ZKUtil(162): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,408 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ZKUtil(162): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,408 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ZKUtil(162): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,408 DEBUG [RS:2;jenkins-hbase4:35105] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:57,408 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ZKUtil(162): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,408 INFO [RS:2;jenkins-hbase4:35105] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:57,409 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:57,409 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/info 2023-07-15 13:15:57,410 INFO [RS:0;jenkins-hbase4:46535] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:57,410 INFO [RS:2;jenkins-hbase4:35105] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:57,410 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:57,410 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,410 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:15:57,411 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:15:57,411 INFO [RS:2;jenkins-hbase4:35105] throttle.PressureAwareCompactionThroughputController(131): 
Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:57,411 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,411 INFO [RS:1;jenkins-hbase4:40619] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:15:57,411 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:57,416 INFO [RS:0;jenkins-hbase4:46535] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:57,416 INFO [RS:0;jenkins-hbase4:46535] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:57,416 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,416 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,416 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:57,416 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,417 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,417 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,417 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,417 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,417 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:57,418 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,418 INFO [RS:1;jenkins-hbase4:40619] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:15:57,418 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:57,418 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-15 13:15:57,418 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,418 INFO [RS:1;jenkins-hbase4:40619] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:15:57,418 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,418 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,418 DEBUG [RS:2;jenkins-hbase4:35105] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,419 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,418 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:15:57,419 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,418 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:57,419 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG 
[RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 DEBUG [RS:0;jenkins-hbase4:46535] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,420 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,420 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,420 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,420 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,420 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,420 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:57,422 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/table 2023-07-15 13:15:57,422 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:57,423 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,427 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,427 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,427 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,427 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,427 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,427 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,427 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [RS:1;jenkins-hbase4:40619] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:15:57,428 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740 2023-07-15 13:15:57,430 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 13:15:57,432 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:57,440 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,440 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:57,440 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,440 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,440 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,441 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10042726400, jitterRate=-0.0646982192993164}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:57,441 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:57,441 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:57,441 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:57,441 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:57,441 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:57,441 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:57,441 INFO [RS:2;jenkins-hbase4:35105] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:57,441 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35105,1689426956929-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,447 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:57,447 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:57,448 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:15:57,448 INFO [PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 13:15:57,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 13:15:57,449 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 13:15:57,452 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 13:15:57,453 INFO [RS:0;jenkins-hbase4:46535] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:57,453 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46535,1689426956613-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,458 INFO [RS:1;jenkins-hbase4:40619] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:15:57,459 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40619,1689426956776-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,462 INFO [RS:2;jenkins-hbase4:35105] regionserver.Replication(203): jenkins-hbase4.apache.org,35105,1689426956929 started 2023-07-15 13:15:57,463 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35105,1689426956929, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35105, sessionid=0x101692013650003 2023-07-15 13:15:57,463 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:57,463 DEBUG [RS:2;jenkins-hbase4:35105] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,463 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35105,1689426956929' 2023-07-15 13:15:57,463 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:57,463 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35105,1689426956929' 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:57,464 DEBUG [RS:2;jenkins-hbase4:35105] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:57,465 DEBUG [RS:2;jenkins-hbase4:35105] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:57,465 INFO [RS:2;jenkins-hbase4:35105] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 13:15:57,467 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,467 INFO [RS:0;jenkins-hbase4:46535] regionserver.Replication(203): jenkins-hbase4.apache.org,46535,1689426956613 started 2023-07-15 13:15:57,467 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46535,1689426956613, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46535, sessionid=0x101692013650001 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46535,1689426956613' 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:57,468 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ZKUtil(398): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 13:15:57,468 INFO [RS:2;jenkins-hbase4:35105] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:57,468 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,468 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46535,1689426956613' 2023-07-15 13:15:57,469 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:57,469 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,469 DEBUG [RS:0;jenkins-hbase4:46535] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:57,469 DEBUG [RS:0;jenkins-hbase4:46535] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:57,469 INFO [RS:0;jenkins-hbase4:46535] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 13:15:57,469 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,470 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ZKUtil(398): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 13:15:57,470 INFO [RS:0;jenkins-hbase4:46535] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 13:15:57,470 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,470 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,470 INFO [RS:1;jenkins-hbase4:40619] regionserver.Replication(203): jenkins-hbase4.apache.org,40619,1689426956776 started 2023-07-15 13:15:57,471 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40619,1689426956776, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40619, sessionid=0x101692013650002 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40619,1689426956776' 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40619,1689426956776' 2023-07-15 13:15:57,471 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:15:57,472 DEBUG [RS:1;jenkins-hbase4:40619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:15:57,472 DEBUG [RS:1;jenkins-hbase4:40619] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:15:57,472 INFO [RS:1;jenkins-hbase4:40619] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 13:15:57,472 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-15 13:15:57,472 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ZKUtil(398): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 13:15:57,472 INFO [RS:1;jenkins-hbase4:40619] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 13:15:57,472 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,473 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,572 INFO [RS:2;jenkins-hbase4:35105] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35105%2C1689426956929, suffix=, logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,35105,1689426956929, archiveDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs, maxLogs=32 2023-07-15 13:15:57,572 INFO [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46535%2C1689426956613, suffix=, logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,46535,1689426956613, archiveDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs, maxLogs=32 2023-07-15 13:15:57,574 INFO [RS:1;jenkins-hbase4:40619] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40619%2C1689426956776, suffix=, logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,40619,1689426956776, archiveDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs, maxLogs=32 2023-07-15 13:15:57,601 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK] 2023-07-15 13:15:57,601 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK] 2023-07-15 13:15:57,601 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK] 2023-07-15 13:15:57,602 DEBUG [jenkins-hbase4:44709] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 13:15:57,602 DEBUG [jenkins-hbase4:44709] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:57,603 DEBUG [jenkins-hbase4:44709] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:57,603 DEBUG [jenkins-hbase4:44709] balancer.BaseLoadBalancer$Cluster(362): server 
1 is on host 0 2023-07-15 13:15:57,603 DEBUG [jenkins-hbase4:44709] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:57,603 DEBUG [jenkins-hbase4:44709] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:57,607 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46535,1689426956613, state=OPENING 2023-07-15 13:15:57,612 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK] 2023-07-15 13:15:57,612 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK] 2023-07-15 13:15:57,615 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK] 2023-07-15 13:15:57,615 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 13:15:57,615 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK] 2023-07-15 13:15:57,615 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK] 2023-07-15 13:15:57,616 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK] 2023-07-15 13:15:57,617 INFO [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,46535,1689426956613/jenkins-hbase4.apache.org%2C46535%2C1689426956613.1689426957573 2023-07-15 13:15:57,618 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:57,619 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:57,619 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46535,1689426956613}] 2023-07-15 13:15:57,635 INFO [RS:1;jenkins-hbase4:40619] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,40619,1689426956776/jenkins-hbase4.apache.org%2C40619%2C1689426956776.1689426957575 
2023-07-15 13:15:57,639 DEBUG [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK], DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK], DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK]] 2023-07-15 13:15:57,646 DEBUG [RS:1;jenkins-hbase4:40619] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK], DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK], DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK]] 2023-07-15 13:15:57,649 INFO [RS:2;jenkins-hbase4:35105] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,35105,1689426956929/jenkins-hbase4.apache.org%2C35105%2C1689426956929.1689426957573 2023-07-15 13:15:57,649 DEBUG [RS:2;jenkins-hbase4:35105] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK], DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK], DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK]] 2023-07-15 13:15:57,800 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:57,801 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:57,803 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42072, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:57,808 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 13:15:57,808 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:15:57,809 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46535%2C1689426956613.meta, suffix=.meta, logDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,46535,1689426956613, archiveDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs, maxLogs=32 2023-07-15 13:15:57,830 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK] 2023-07-15 13:15:57,830 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK] 2023-07-15 13:15:57,831 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK] 2023-07-15 13:15:57,835 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/WALs/jenkins-hbase4.apache.org,46535,1689426956613/jenkins-hbase4.apache.org%2C46535%2C1689426956613.meta.1689426957810.meta 2023-07-15 13:15:57,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41675,DS-9244a2bc-29cf-4d1b-bfb6-235f21ada008,DISK], DatanodeInfoWithStorage[127.0.0.1:33371,DS-caf7a4b7-1eac-4f10-81f1-d45ee148d316,DISK], DatanodeInfoWithStorage[127.0.0.1:42579,DS-ba4c84fb-99c3-41fc-a416-ef34c0e25e11,DISK]] 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 13:15:57,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 13:15:57,839 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 13:15:57,843 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:15:57,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/info 2023-07-15 13:15:57,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/info 2023-07-15 13:15:57,845 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; 
tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:15:57,845 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:15:57,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:57,847 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:15:57,847 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:15:57,848 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,848 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:15:57,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/table 2023-07-15 13:15:57,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/table 2023-07-15 13:15:57,849 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality 
to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:15:57,850 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:57,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740 2023-07-15 13:15:57,852 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740 2023-07-15 13:15:57,855 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 13:15:57,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:15:57,858 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11826946080, jitterRate=0.10147018730640411}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:15:57,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:15:57,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689426957800 2023-07-15 13:15:57,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 13:15:57,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 13:15:57,864 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46535,1689426956613, state=OPEN 2023-07-15 13:15:57,866 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:15:57,866 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:15:57,868 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 13:15:57,868 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46535,1689426956613 in 247 msec 
2023-07-15 13:15:57,869 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 13:15:57,869 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 420 msec 2023-07-15 13:15:57,871 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 572 msec 2023-07-15 13:15:57,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689426957871, completionTime=-1 2023-07-15 13:15:57,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 13:15:57,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-15 13:15:57,874 DEBUG [hconnection-0x644d95c1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:57,875 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42084, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:57,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 13:15:57,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689427017876 2023-07-15 13:15:57,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689427077876 2023-07-15 13:15:57,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-15 13:15:57,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1689426956415-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1689426956415-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1689426956415-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44709, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-15 13:15:57,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:57,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 13:15:57,888 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:57,888 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 13:15:57,889 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:57,891 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:57,892 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442 empty. 2023-07-15 13:15:57,893 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:57,893 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 13:15:57,928 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:57,931 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7b7e014f227e2eaaf5fea6f62eef7442, NAME => 'hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp 2023-07-15 13:15:57,953 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-15 13:15:57,963 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 13:15:57,967 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:57,969 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:57,971 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:57,972 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314 empty. 2023-07-15 13:15:57,972 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:57,972 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 13:15:58,007 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,007 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7b7e014f227e2eaaf5fea6f62eef7442, disabling compactions & flushes 2023-07-15 13:15:58,007 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:58,007 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:58,007 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. after waiting 0 ms 2023-07-15 13:15:58,008 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:58,008 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 
2023-07-15 13:15:58,008 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7b7e014f227e2eaaf5fea6f62eef7442: 2023-07-15 13:15:58,011 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:58,012 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426958012"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426958012"}]},"ts":"1689426958012"} 2023-07-15 13:15:58,016 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:58,017 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:58,017 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958017"}]},"ts":"1689426958017"} 2023-07-15 13:15:58,017 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:58,018 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 13:15:58,019 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e34116b8e67adf943377ac8cc72f7314, NAME => 'hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp 2023-07-15 13:15:58,022 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:58,022 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:58,022 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:58,022 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:58,022 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:58,023 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7b7e014f227e2eaaf5fea6f62eef7442, ASSIGN}] 2023-07-15 13:15:58,024 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7b7e014f227e2eaaf5fea6f62eef7442, ASSIGN 2023-07-15 13:15:58,024 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7b7e014f227e2eaaf5fea6f62eef7442, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40619,1689426956776; forceNewPlan=false, retain=false 2023-07-15 13:15:58,038 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,039 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e34116b8e67adf943377ac8cc72f7314, disabling compactions & flushes 2023-07-15 13:15:58,039 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,039 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,039 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. after waiting 0 ms 2023-07-15 13:15:58,039 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,039 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,039 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e34116b8e67adf943377ac8cc72f7314: 2023-07-15 13:15:58,041 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:58,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426958042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426958042"}]},"ts":"1689426958042"} 2023-07-15 13:15:58,044 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
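At this point the master has written the new hbase:rsgroup region into hbase:meta during CREATE_TABLE_ADD_TO_META (the Put above carries the info:regioninfo and info:state cells). A minimal sketch of how that entry becomes visible through the public client API once assignment finishes; the Admin handle and class name below are illustrative and not part of this test run:

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class ListRsGroupRegions {
      // Reads the region list back through the client API; the data ultimately
      // comes from the info:regioninfo cells written to hbase:meta above.
      static void list(Admin admin) throws Exception {
        List<RegionInfo> regions = admin.getRegions(TableName.valueOf("hbase", "rsgroup"));
        for (RegionInfo ri : regions) {
          System.out.println(ri.getRegionNameAsString() + " encoded=" + ri.getEncodedName());
        }
      }
    }
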
2023-07-15 13:15:58,044 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:58,045 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958044"}]},"ts":"1689426958044"} 2023-07-15 13:15:58,047 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 13:15:58,051 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:58,051 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:58,051 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:58,051 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:58,051 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:58,051 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e34116b8e67adf943377ac8cc72f7314, ASSIGN}] 2023-07-15 13:15:58,053 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e34116b8e67adf943377ac8cc72f7314, ASSIGN 2023-07-15 13:15:58,054 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e34116b8e67adf943377ac8cc72f7314, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40619,1689426956776; forceNewPlan=false, retain=false 2023-07-15 13:15:58,054 INFO [jenkins-hbase4:44709] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-15 13:15:58,057 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7b7e014f227e2eaaf5fea6f62eef7442, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:58,057 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e34116b8e67adf943377ac8cc72f7314, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:58,057 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426958056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426958056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426958056"}]},"ts":"1689426958056"} 2023-07-15 13:15:58,057 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426958057"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426958057"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426958057"}]},"ts":"1689426958057"} 2023-07-15 13:15:58,058 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 7b7e014f227e2eaaf5fea6f62eef7442, server=jenkins-hbase4.apache.org,40619,1689426956776}] 2023-07-15 13:15:58,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure e34116b8e67adf943377ac8cc72f7314, server=jenkins-hbase4.apache.org,40619,1689426956776}] 2023-07-15 13:15:58,210 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:58,211 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:15:58,213 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58862, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:15:58,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e34116b8e67adf943377ac8cc72f7314, NAME => 'hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:58,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:15:58,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. service=MultiRowMutationService 2023-07-15 13:15:58,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
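The OpenRegionProcedure dispatched above lands on region server jenkins-hbase4.apache.org,40619,1689426956776, which loads org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from the hbase:rsgroup table descriptor at priority 536870911 (Coprocessor.PRIORITY_SYSTEM); the descriptor also pins DisabledRegionSplitPolicy. The rsgroup manager builds that descriptor internally, but an equivalent descriptor can be expressed with the 2.x builder API; the sketch below is purely illustrative:

    import org.apache.hadoop.hbase.Coprocessor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.CoprocessorDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupDescriptorSketch {
      // Illustrative equivalent of the descriptor logged for hbase:rsgroup.
      static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)                        // VERSIONS => '1' in the logged descriptor
            .build())
          .setCoprocessor(CoprocessorDescriptorBuilder
            .newBuilder("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setPriority(Coprocessor.PRIORITY_SYSTEM) // 536870911, as logged
            .build())
          .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .build();
      }
    }
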
2023-07-15 13:15:58,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,221 INFO [StoreOpener-e34116b8e67adf943377ac8cc72f7314-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,223 DEBUG [StoreOpener-e34116b8e67adf943377ac8cc72f7314-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/m 2023-07-15 13:15:58,223 DEBUG [StoreOpener-e34116b8e67adf943377ac8cc72f7314-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/m 2023-07-15 13:15:58,223 INFO [StoreOpener-e34116b8e67adf943377ac8cc72f7314-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e34116b8e67adf943377ac8cc72f7314 columnFamilyName m 2023-07-15 13:15:58,223 INFO [StoreOpener-e34116b8e67adf943377ac8cc72f7314-1] regionserver.HStore(310): Store=e34116b8e67adf943377ac8cc72f7314/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:58,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,224 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,227 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:58,232 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:58,232 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e34116b8e67adf943377ac8cc72f7314; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5a751f57, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:58,233 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e34116b8e67adf943377ac8cc72f7314: 2023-07-15 13:15:58,234 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314., pid=9, masterSystemTime=1689426958210 2023-07-15 13:15:58,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,239 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:58,239 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:58,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7b7e014f227e2eaaf5fea6f62eef7442, NAME => 'hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:58,239 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e34116b8e67adf943377ac8cc72f7314, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:58,240 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426958239"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426958239"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426958239"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426958239"}]},"ts":"1689426958239"} 2023-07-15 13:15:58,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-15 13:15:58,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure e34116b8e67adf943377ac8cc72f7314, server=jenkins-hbase4.apache.org,40619,1689426956776 in 182 msec 2023-07-15 13:15:58,245 INFO [StoreOpener-7b7e014f227e2eaaf5fea6f62eef7442-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-15 13:15:58,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e34116b8e67adf943377ac8cc72f7314, ASSIGN in 192 msec 2023-07-15 13:15:58,246 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:58,247 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958246"}]},"ts":"1689426958246"} 2023-07-15 13:15:58,247 DEBUG [StoreOpener-7b7e014f227e2eaaf5fea6f62eef7442-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/info 2023-07-15 13:15:58,247 DEBUG [StoreOpener-7b7e014f227e2eaaf5fea6f62eef7442-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/info 2023-07-15 13:15:58,248 INFO [StoreOpener-7b7e014f227e2eaaf5fea6f62eef7442-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7b7e014f227e2eaaf5fea6f62eef7442 columnFamilyName info 2023-07-15 13:15:58,249 INFO [StoreOpener-7b7e014f227e2eaaf5fea6f62eef7442-1] regionserver.HStore(310): Store=7b7e014f227e2eaaf5fea6f62eef7442/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:58,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,250 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 13:15:58,253 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:58,254 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 300 msec 2023-07-15 13:15:58,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:58,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:58,259 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7b7e014f227e2eaaf5fea6f62eef7442; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11523474880, jitterRate=0.07320722937583923}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:58,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7b7e014f227e2eaaf5fea6f62eef7442: 2023-07-15 13:15:58,260 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442., pid=8, masterSystemTime=1689426958210 2023-07-15 13:15:58,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:58,261 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 
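Both system regions, e34116b8e67adf943377ac8cc72f7314 (hbase:rsgroup) and 7b7e014f227e2eaaf5fea6f62eef7442 (hbase:namespace), are now open on jenkins-hbase4.apache.org,40619,1689426956776. A test driving an HBaseTestingUtility minicluster like this one would typically block until this point; a sketch, assuming the utility instance is in scope:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForSystemTables {
      // Polls hbase:meta until every region of each table has an open location.
      static void await(HBaseTestingUtility util) throws Exception {
        util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase", "namespace"));
        util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase", "rsgroup"));
      }
    }
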
2023-07-15 13:15:58,262 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7b7e014f227e2eaaf5fea6f62eef7442, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:58,262 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426958261"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426958261"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426958261"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426958261"}]},"ts":"1689426958261"} 2023-07-15 13:15:58,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-15 13:15:58,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 7b7e014f227e2eaaf5fea6f62eef7442, server=jenkins-hbase4.apache.org,40619,1689426956776 in 205 msec 2023-07-15 13:15:58,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-15 13:15:58,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7b7e014f227e2eaaf5fea6f62eef7442, ASSIGN in 241 msec 2023-07-15 13:15:58,268 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:58,268 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958268"}]},"ts":"1689426958268"} 2023-07-15 13:15:58,269 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 13:15:58,274 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:58,274 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:58,275 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:58,276 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 391 msec 2023-07-15 13:15:58,279 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 13:15:58,279 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
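With the hbase:rsgroup table online, the startup worker refreshes its cached group information and GroupBasedLoadBalancer reports itself online; two GroupInfo entries are written under /hbase/rsgroup. A hedged sketch of fetching the default group through the rsgroup admin client shipped with this module, assuming a Connection pointed at this minicluster:

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupLookup {
      // Returns the "default" group, which at this point should contain all three
      // region servers started by the minicluster.
      static RSGroupInfo defaultGroup(Connection conn) throws Exception {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
        return rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      }
    }
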
2023-07-15 13:15:58,283 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:58,283 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:58,285 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:15:58,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 13:15:58,286 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44709,1689426956415] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 13:15:58,288 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:58,288 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:58,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 13:15:58,302 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:58,305 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-15 13:15:58,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 13:15:58,324 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:58,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-15 13:15:58,341 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 13:15:58,348 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 13:15:58,348 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.216sec 2023-07-15 13:15:58,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-15 13:15:58,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:58,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-15 13:15:58,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-15 13:15:58,351 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:58,352 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:58,353 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-15 13:15:58,354 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/quota/e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,355 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/quota/e492a866231cfb4537c2d126f130a273 empty. 2023-07-15 13:15:58,355 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/quota/e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,355 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-15 13:15:58,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-15 13:15:58,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-15 13:15:58,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:58,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:15:58,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
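Master initialization has completed (1.216 sec) and the MasterQuotaManager is creating hbase:quota with the 'q' (quota settings) and 'u' (usage) families. Once that table is online, quota RPCs persist their settings there; a minimal sketch with an arbitrary, hypothetical user throttle:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class ThrottleSketch {
      // Hypothetical quota: cap a user at 100 requests per second. The resulting
      // setting is stored as a row in the hbase:quota table created above.
      static void throttle(Admin admin) throws Exception {
        admin.setQuota(QuotaSettingsFactory.throttleUser(
            "someUser", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
      }
    }
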
2023-07-15 13:15:58,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 13:15:58,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1689426956415-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-15 13:15:58,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44709,1689426956415-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-15 13:15:58,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 13:15:58,373 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:58,375 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => e492a866231cfb4537c2d126f130a273, NAME => 'hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp 2023-07-15 13:15:58,386 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,386 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing e492a866231cfb4537c2d126f130a273, disabling compactions & flushes 2023-07-15 13:15:58,387 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:58,387 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:58,387 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. after waiting 0 ms 2023-07-15 13:15:58,387 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:58,387 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 
2023-07-15 13:15:58,387 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for e492a866231cfb4537c2d126f130a273: 2023-07-15 13:15:58,389 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:58,390 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689426958390"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426958390"}]},"ts":"1689426958390"} 2023-07-15 13:15:58,391 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:58,392 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:58,392 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958392"}]},"ts":"1689426958392"} 2023-07-15 13:15:58,393 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-15 13:15:58,397 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:58,397 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:58,397 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:58,397 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:58,397 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:58,397 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e492a866231cfb4537c2d126f130a273, ASSIGN}] 2023-07-15 13:15:58,398 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=e492a866231cfb4537c2d126f130a273, ASSIGN 2023-07-15 13:15:58,399 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=e492a866231cfb4537c2d126f130a273, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46535,1689426956613; forceNewPlan=false, retain=false 2023-07-15 13:15:58,417 DEBUG [Listener at localhost/41271] zookeeper.ReadOnlyZKClient(139): Connect 0x6627cdb9 to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:58,424 DEBUG [Listener at localhost/41271] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bda575a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:58,425 DEBUG 
[hconnection-0x750c4050-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:15:58,427 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42088, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:15:58,428 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:58,429 INFO [Listener at localhost/41271] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:15:58,431 DEBUG [Listener at localhost/41271] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 13:15:58,433 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 13:15:58,436 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 13:15:58,436 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:58,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 13:15:58,437 DEBUG [Listener at localhost/41271] zookeeper.ReadOnlyZKClient(139): Connect 0x6b3eff3d to 127.0.0.1:62025 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:15:58,442 DEBUG [Listener at localhost/41271] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c273cca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:15:58,442 INFO [Listener at localhost/41271] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62025 2023-07-15 13:15:58,446 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:15:58,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10169201365000a connected 2023-07-15 13:15:58,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-15 13:15:58,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-15 13:15:58,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-15 13:15:58,460 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): 
master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:58,463 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 13 msec 2023-07-15 13:15:58,549 INFO [jenkins-hbase4:44709] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:15:58,551 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e492a866231cfb4537c2d126f130a273, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:58,551 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689426958551"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426958551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426958551"}]},"ts":"1689426958551"} 2023-07-15 13:15:58,553 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure e492a866231cfb4537c2d126f130a273, server=jenkins-hbase4.apache.org,46535,1689426956613}] 2023-07-15 13:15:58,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-15 13:15:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:58,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-15 13:15:58,564 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:58,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-15 13:15:58,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:15:58,566 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:58,566 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:15:58,568 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:15:58,570 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,570 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(153): Directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 empty. 2023-07-15 13:15:58,571 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,571 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-15 13:15:58,583 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-15 13:15:58,584 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0e7ba6ff5ec1d53e8f18b7bd18bcc745, NAME => 'np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp 2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 0e7ba6ff5ec1d53e8f18b7bd18bcc745, disabling compactions & flushes 2023-07-15 13:15:58,599 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. after waiting 0 ms 2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:58,599 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 
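The two client requests logged above, creating namespace np1 with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, and then creating np1:table1 with a single family fam1, map directly onto the Admin API. The test's own source is not part of this log, but the equivalent calls look like this:

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateNp1 {
      static void create(Admin admin) throws Exception {
        // Namespace capped at 5 regions and 2 tables, as in the logged request.
        admin.createNamespace(NamespaceDescriptor.create("np1")
            .addConfiguration("hbase.namespace.quota.maxregions", "5")
            .addConfiguration("hbase.namespace.quota.maxtables", "2")
            .build());
        // Single-family table inside that namespace.
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build());
      }
    }
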
2023-07-15 13:15:58,599 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 0e7ba6ff5ec1d53e8f18b7bd18bcc745: 2023-07-15 13:15:58,602 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:15:58,602 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426958602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426958602"}]},"ts":"1689426958602"} 2023-07-15 13:15:58,604 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:15:58,604 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:15:58,605 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958604"}]},"ts":"1689426958604"} 2023-07-15 13:15:58,606 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-15 13:15:58,610 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:15:58,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:15:58,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:15:58,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:15:58,611 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:15:58,611 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, ASSIGN}] 2023-07-15 13:15:58,612 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, ASSIGN 2023-07-15 13:15:58,612 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46535,1689426956613; forceNewPlan=false, retain=false 2023-07-15 13:15:58,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:15:58,708 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 
2023-07-15 13:15:58,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e492a866231cfb4537c2d126f130a273, NAME => 'hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:58,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,710 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,711 DEBUG [StoreOpener-e492a866231cfb4537c2d126f130a273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/q 2023-07-15 13:15:58,711 DEBUG [StoreOpener-e492a866231cfb4537c2d126f130a273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/q 2023-07-15 13:15:58,712 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e492a866231cfb4537c2d126f130a273 columnFamilyName q 2023-07-15 13:15:58,712 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] regionserver.HStore(310): Store=e492a866231cfb4537c2d126f130a273/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:58,712 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,713 DEBUG 
[StoreOpener-e492a866231cfb4537c2d126f130a273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/u 2023-07-15 13:15:58,713 DEBUG [StoreOpener-e492a866231cfb4537c2d126f130a273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/u 2023-07-15 13:15:58,714 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e492a866231cfb4537c2d126f130a273 columnFamilyName u 2023-07-15 13:15:58,714 INFO [StoreOpener-e492a866231cfb4537c2d126f130a273-1] regionserver.HStore(310): Store=e492a866231cfb4537c2d126f130a273/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:58,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,717 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
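FlushLargeStoresPolicy reports that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so it falls back to the memstore flush size divided by the number of families: 128 MB / 2 = 64 MB, matching the logged value. For a table that should use an explicit bound instead, the property can be set on the descriptor; a sketch with a hypothetical table name and a 16 MB bound:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class ExplicitFlushLowerBound {
      // Two families, like hbase:quota, but with the per-family flush lower bound pinned.
      static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("exampleTable"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("q"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("u"))
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
              String.valueOf(16L * 1024 * 1024))      // 16 MB
          .build();
      }
    }
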
2023-07-15 13:15:58,718 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:58,719 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:58,720 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e492a866231cfb4537c2d126f130a273; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10670804640, jitterRate=-0.006203874945640564}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-15 13:15:58,720 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e492a866231cfb4537c2d126f130a273: 2023-07-15 13:15:58,721 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273., pid=15, masterSystemTime=1689426958704 2023-07-15 13:15:58,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:58,722 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:58,722 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e492a866231cfb4537c2d126f130a273, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:58,723 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689426958722"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426958722"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426958722"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426958722"}]},"ts":"1689426958722"} 2023-07-15 13:15:58,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-15 13:15:58,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure e492a866231cfb4537c2d126f130a273, server=jenkins-hbase4.apache.org,46535,1689426956613 in 171 msec 2023-07-15 13:15:58,727 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-15 13:15:58,727 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=e492a866231cfb4537c2d126f130a273, ASSIGN in 328 msec 2023-07-15 13:15:58,728 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:58,728 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958728"}]},"ts":"1689426958728"} 2023-07-15 13:15:58,729 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-15 13:15:58,731 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:58,732 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 382 msec 2023-07-15 13:15:58,763 INFO [jenkins-hbase4:44709] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:15:58,764 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0e7ba6ff5ec1d53e8f18b7bd18bcc745, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:58,764 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426958764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426958764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426958764"}]},"ts":"1689426958764"} 2023-07-15 13:15:58,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 0e7ba6ff5ec1d53e8f18b7bd18bcc745, server=jenkins-hbase4.apache.org,46535,1689426956613}] 2023-07-15 13:15:58,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:15:58,921 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 
2023-07-15 13:15:58,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0e7ba6ff5ec1d53e8f18b7bd18bcc745, NAME => 'np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:15:58,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:15:58,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,923 INFO [StoreOpener-0e7ba6ff5ec1d53e8f18b7bd18bcc745-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,924 DEBUG [StoreOpener-0e7ba6ff5ec1d53e8f18b7bd18bcc745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/fam1 2023-07-15 13:15:58,924 DEBUG [StoreOpener-0e7ba6ff5ec1d53e8f18b7bd18bcc745-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/fam1 2023-07-15 13:15:58,925 INFO [StoreOpener-0e7ba6ff5ec1d53e8f18b7bd18bcc745-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0e7ba6ff5ec1d53e8f18b7bd18bcc745 columnFamilyName fam1 2023-07-15 13:15:58,925 INFO [StoreOpener-0e7ba6ff5ec1d53e8f18b7bd18bcc745-1] regionserver.HStore(310): Store=0e7ba6ff5ec1d53e8f18b7bd18bcc745/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:15:58,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:58,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:15:58,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0e7ba6ff5ec1d53e8f18b7bd18bcc745; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11279312160, jitterRate=0.050467804074287415}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:15:58,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0e7ba6ff5ec1d53e8f18b7bd18bcc745: 2023-07-15 13:15:58,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745., pid=18, masterSystemTime=1689426958917 2023-07-15 13:15:58,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:58,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:58,933 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0e7ba6ff5ec1d53e8f18b7bd18bcc745, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:58,933 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426958933"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426958933"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426958933"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426958933"}]},"ts":"1689426958933"} 2023-07-15 13:15:58,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-15 13:15:58,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 0e7ba6ff5ec1d53e8f18b7bd18bcc745, server=jenkins-hbase4.apache.org,46535,1689426956613 in 168 msec 2023-07-15 13:15:58,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-15 13:15:58,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, ASSIGN in 326 msec 2023-07-15 13:15:58,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:15:58,939 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426958939"}]},"ts":"1689426958939"} 2023-07-15 13:15:58,940 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-15 13:15:58,942 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:15:58,943 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 381 msec 2023-07-15 13:15:59,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:15:59,168 INFO [Listener at localhost/41271] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-15 13:15:59,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:15:59,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-15 13:15:59,172 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:15:59,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-15 13:15:59,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 13:15:59,191 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-15 13:15:59,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 13:15:59,276 INFO [Listener at localhost/41271] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-15 13:15:59,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:15:59,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:15:59,278 INFO [Listener at localhost/41271] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-15 13:15:59,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-15 13:15:59,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-15 13:15:59,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 13:15:59,281 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426959281"}]},"ts":"1689426959281"} 2023-07-15 13:15:59,283 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-15 13:15:59,284 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-15 13:15:59,285 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, UNASSIGN}] 2023-07-15 13:15:59,285 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, UNASSIGN 2023-07-15 13:15:59,286 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0e7ba6ff5ec1d53e8f18b7bd18bcc745, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:59,286 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426959286"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426959286"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426959286"}]},"ts":"1689426959286"} 2023-07-15 13:15:59,287 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 0e7ba6ff5ec1d53e8f18b7bd18bcc745, server=jenkins-hbase4.apache.org,46535,1689426956613}] 2023-07-15 13:15:59,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 13:15:59,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:59,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0e7ba6ff5ec1d53e8f18b7bd18bcc745, disabling compactions & flushes 2023-07-15 13:15:59,442 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:59,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:59,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. after waiting 0 ms 2023-07-15 13:15:59,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:59,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:59,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745. 2023-07-15 13:15:59,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0e7ba6ff5ec1d53e8f18b7bd18bcc745: 2023-07-15 13:15:59,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:59,448 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0e7ba6ff5ec1d53e8f18b7bd18bcc745, regionState=CLOSED 2023-07-15 13:15:59,448 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426959448"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426959448"}]},"ts":"1689426959448"} 2023-07-15 13:15:59,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-15 13:15:59,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 0e7ba6ff5ec1d53e8f18b7bd18bcc745, server=jenkins-hbase4.apache.org,46535,1689426956613 in 162 msec 2023-07-15 13:15:59,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-15 13:15:59,452 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0e7ba6ff5ec1d53e8f18b7bd18bcc745, UNASSIGN in 165 msec 2023-07-15 13:15:59,452 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426959452"}]},"ts":"1689426959452"} 2023-07-15 13:15:59,453 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-15 13:15:59,456 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-15 13:15:59,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 177 msec 2023-07-15 13:15:59,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 13:15:59,583 INFO [Listener at localhost/41271] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-15 13:15:59,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-15 13:15:59,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,586 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-15 13:15:59,587 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:15:59,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:15:59,591 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:59,592 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/fam1, FileablePath, hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/recovered.edits] 2023-07-15 13:15:59,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-15 13:15:59,597 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/recovered.edits/4.seqid to hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/archive/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745/recovered.edits/4.seqid 2023-07-15 13:15:59,597 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/.tmp/data/np1/table1/0e7ba6ff5ec1d53e8f18b7bd18bcc745 2023-07-15 13:15:59,598 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-15 13:15:59,599 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,601 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-15 13:15:59,603 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-15 13:15:59,604 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,604 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-15 13:15:59,604 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426959604"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:59,606 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 13:15:59,606 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0e7ba6ff5ec1d53e8f18b7bd18bcc745, NAME => 'np1:table1,,1689426958561.0e7ba6ff5ec1d53e8f18b7bd18bcc745.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 13:15:59,606 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-15 13:15:59,606 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426959606"}]},"ts":"9223372036854775807"} 2023-07-15 13:15:59,607 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-15 13:15:59,611 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 13:15:59,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 27 msec 2023-07-15 13:15:59,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-15 13:15:59,694 INFO [Listener at localhost/41271] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-15 13:15:59,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-15 13:15:59,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,706 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,709 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,711 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-15 13:15:59,712 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-15 13:15:59,712 DEBUG [Listener at 
localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:15:59,713 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,715 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 13:15:59,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-15 13:15:59,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44709] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-15 13:15:59,813 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 13:15:59,813 INFO [Listener at localhost/41271] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 13:15:59,813 DEBUG [Listener at localhost/41271] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6627cdb9 to 127.0.0.1:62025 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] util.JVMClusterUtil(257): Found active master hash=1607176717, stopped=false 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:15:59,814 DEBUG [Listener at localhost/41271] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-15 13:15:59,814 INFO [Listener at localhost/41271] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:15:59,817 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:59,817 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:59,817 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:59,817 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:15:59,817 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, 
quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:15:59,817 INFO [Listener at localhost/41271] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 13:15:59,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:59,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:59,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:59,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:15:59,819 DEBUG [Listener at localhost/41271] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3ac278a8 to 127.0.0.1:62025 2023-07-15 13:15:59,819 DEBUG [Listener at localhost/41271] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,819 INFO [Listener at localhost/41271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46535,1689426956613' ***** 2023-07-15 13:15:59,819 INFO [Listener at localhost/41271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:59,819 INFO [Listener at localhost/41271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40619,1689426956776' ***** 2023-07-15 13:15:59,819 INFO [Listener at localhost/41271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:59,819 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:59,820 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:59,820 INFO [Listener at localhost/41271] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35105,1689426956929' ***** 2023-07-15 13:15:59,820 INFO [Listener at localhost/41271] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:15:59,823 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:15:59,823 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:59,829 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:59,832 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:59,833 INFO [RS:2;jenkins-hbase4:35105] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@11e3dcbb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:59,833 INFO [RS:0;jenkins-hbase4:46535] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@43a76ec8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:59,833 INFO [RS:1;jenkins-hbase4:40619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@529e56fb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:15:59,834 INFO [RS:2;jenkins-hbase4:35105] server.AbstractConnector(383): Stopped ServerConnector@25f74100{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:59,834 INFO [RS:0;jenkins-hbase4:46535] server.AbstractConnector(383): Stopped ServerConnector@71cf3187{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:59,834 INFO [RS:1;jenkins-hbase4:40619] server.AbstractConnector(383): Stopped ServerConnector@78bb3eeb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:15:59,834 INFO [RS:2;jenkins-hbase4:35105] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:59,834 INFO [RS:1;jenkins-hbase4:40619] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:59,834 INFO [RS:0;jenkins-hbase4:46535] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:15:59,835 INFO [RS:2;jenkins-hbase4:35105] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7596ce29{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:1;jenkins-hbase4:40619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f107ff7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:0;jenkins-hbase4:46535] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@58399b0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:2;jenkins-hbase4:35105] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31f6ff8a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:0;jenkins-hbase4:46535] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@150bbc40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:1;jenkins-hbase4:40619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@622bd715{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,STOPPED} 2023-07-15 13:15:59,837 INFO [RS:1;jenkins-hbase4:40619] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:59,838 INFO [RS:1;jenkins-hbase4:40619] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-15 13:15:59,838 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:59,838 INFO [RS:1;jenkins-hbase4:40619] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:59,838 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(3305): Received CLOSE for 7b7e014f227e2eaaf5fea6f62eef7442 2023-07-15 13:15:59,839 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(3305): Received CLOSE for e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:59,839 INFO [RS:2;jenkins-hbase4:35105] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:59,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7b7e014f227e2eaaf5fea6f62eef7442, disabling compactions & flushes 2023-07-15 13:15:59,839 INFO [RS:2;jenkins-hbase4:35105] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:15:59,839 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:15:59,839 INFO [RS:2;jenkins-hbase4:35105] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:59,839 DEBUG [RS:1;jenkins-hbase4:40619] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7200cef4 to 127.0.0.1:62025 2023-07-15 13:15:59,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:59,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:59,839 INFO [RS:0;jenkins-hbase4:46535] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:15:59,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. after waiting 0 ms 2023-07-15 13:15:59,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:59,840 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:15:59,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7b7e014f227e2eaaf5fea6f62eef7442 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-15 13:15:59,839 DEBUG [RS:1;jenkins-hbase4:40619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,839 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:59,841 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-15 13:15:59,841 DEBUG [RS:2;jenkins-hbase4:35105] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x702a8227 to 127.0.0.1:62025 2023-07-15 13:15:59,840 INFO [RS:0;jenkins-hbase4:46535] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-15 13:15:59,841 DEBUG [RS:2;jenkins-hbase4:35105] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,841 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35105,1689426956929; all regions closed. 2023-07-15 13:15:59,841 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1478): Online Regions={7b7e014f227e2eaaf5fea6f62eef7442=hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442., e34116b8e67adf943377ac8cc72f7314=hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314.} 2023-07-15 13:15:59,841 DEBUG [RS:2;jenkins-hbase4:35105] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-15 13:15:59,841 DEBUG [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1504): Waiting on 7b7e014f227e2eaaf5fea6f62eef7442, e34116b8e67adf943377ac8cc72f7314 2023-07-15 13:15:59,841 INFO [RS:0;jenkins-hbase4:46535] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(3305): Received CLOSE for e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:15:59,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e492a866231cfb4537c2d126f130a273, disabling compactions & flushes 2023-07-15 13:15:59,842 DEBUG [RS:0;jenkins-hbase4:46535] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32f9b515 to 127.0.0.1:62025 2023-07-15 13:15:59,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:59,842 DEBUG [RS:0;jenkins-hbase4:46535] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:15:59,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. after waiting 0 ms 2023-07-15 13:15:59,842 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 13:15:59,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 
2023-07-15 13:15:59,844 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-15 13:15:59,844 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1478): Online Regions={e492a866231cfb4537c2d126f130a273=hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273., 1588230740=hbase:meta,,1.1588230740} 2023-07-15 13:15:59,844 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:15:59,844 DEBUG [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1504): Waiting on 1588230740, e492a866231cfb4537c2d126f130a273 2023-07-15 13:15:59,844 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:15:59,844 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:15:59,844 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:15:59,844 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:15:59,844 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-15 13:15:59,848 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:59,853 DEBUG [RS:2;jenkins-hbase4:35105] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs 2023-07-15 13:15:59,853 INFO [RS:2;jenkins-hbase4:35105] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35105%2C1689426956929:(num 1689426957573) 2023-07-15 13:15:59,853 DEBUG [RS:2;jenkins-hbase4:35105] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:15:59,853 INFO [RS:2;jenkins-hbase4:35105] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:15:59,853 INFO [RS:2;jenkins-hbase4:35105] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:15:59,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/quota/e492a866231cfb4537c2d126f130a273/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:15:59,853 INFO [RS:2;jenkins-hbase4:35105] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:15:59,853 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:15:59,854 INFO [RS:2;jenkins-hbase4:35105] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:15:59,854 INFO [RS:2;jenkins-hbase4:35105] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-15 13:15:59,855 INFO [RS:2;jenkins-hbase4:35105] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35105 2023-07-15 13:15:59,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e492a866231cfb4537c2d126f130a273: 2023-07-15 13:15:59,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689426958348.e492a866231cfb4537c2d126f130a273. 2023-07-15 13:15:59,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/.tmp/info/5cff3c48da1040f1a24dbd3041c7f967 2023-07-15 13:15:59,877 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/info/96a8b5c657704a798f49f4b6368fb038 2023-07-15 13:15:59,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5cff3c48da1040f1a24dbd3041c7f967 2023-07-15 13:15:59,885 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96a8b5c657704a798f49f4b6368fb038 2023-07-15 13:15:59,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/.tmp/info/5cff3c48da1040f1a24dbd3041c7f967 as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/info/5cff3c48da1040f1a24dbd3041c7f967 2023-07-15 13:15:59,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5cff3c48da1040f1a24dbd3041c7f967 2023-07-15 13:15:59,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/info/5cff3c48da1040f1a24dbd3041c7f967, entries=3, sequenceid=8, filesize=5.0 K 2023-07-15 13:15:59,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 7b7e014f227e2eaaf5fea6f62eef7442 in 56ms, sequenceid=8, compaction requested=false 2023-07-15 13:15:59,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-15 13:15:59,910 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), 
to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/rep_barrier/f059bc60e6da4973bd22c01677177699 2023-07-15 13:15:59,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/namespace/7b7e014f227e2eaaf5fea6f62eef7442/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-15 13:15:59,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7b7e014f227e2eaaf5fea6f62eef7442: 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689426957883.7b7e014f227e2eaaf5fea6f62eef7442. 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e34116b8e67adf943377ac8cc72f7314, disabling compactions & flushes 2023-07-15 13:15:59,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. after waiting 0 ms 2023-07-15 13:15:59,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 
2023-07-15 13:15:59,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e34116b8e67adf943377ac8cc72f7314 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-15 13:15:59,918 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f059bc60e6da4973bd22c01677177699 2023-07-15 13:15:59,942 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/table/f65796c1f9b0476392359941f92443da 2023-07-15 13:15:59,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/.tmp/m/1300ff2997a74f9e833df6a12ec82a23 2023-07-15 13:15:59,949 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:59,949 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:59,949 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:59,949 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f65796c1f9b0476392359941f92443da 2023-07-15 13:15:59,950 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:59,950 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:59,950 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35105,1689426956929 2023-07-15 13:15:59,950 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:15:59,950 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35105,1689426956929] 2023-07-15 13:15:59,950 DEBUG [RegionServerTracker-0] 
master.DeadServer(103): Processing jenkins-hbase4.apache.org,35105,1689426956929; numProcessing=1 2023-07-15 13:15:59,951 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/info/96a8b5c657704a798f49f4b6368fb038 as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/info/96a8b5c657704a798f49f4b6368fb038 2023-07-15 13:15:59,952 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35105,1689426956929 already deleted, retry=false 2023-07-15 13:15:59,953 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35105,1689426956929 expired; onlineServers=2 2023-07-15 13:15:59,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/.tmp/m/1300ff2997a74f9e833df6a12ec82a23 as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/m/1300ff2997a74f9e833df6a12ec82a23 2023-07-15 13:15:59,959 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96a8b5c657704a798f49f4b6368fb038 2023-07-15 13:15:59,959 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/info/96a8b5c657704a798f49f4b6368fb038, entries=32, sequenceid=31, filesize=8.5 K 2023-07-15 13:15:59,960 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/rep_barrier/f059bc60e6da4973bd22c01677177699 as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/rep_barrier/f059bc60e6da4973bd22c01677177699 2023-07-15 13:15:59,963 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/m/1300ff2997a74f9e833df6a12ec82a23, entries=1, sequenceid=7, filesize=4.9 K 2023-07-15 13:15:59,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for e34116b8e67adf943377ac8cc72f7314 in 48ms, sequenceid=7, compaction requested=false 2023-07-15 13:15:59,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-15 13:15:59,969 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f059bc60e6da4973bd22c01677177699 2023-07-15 13:15:59,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/rep_barrier/f059bc60e6da4973bd22c01677177699, entries=1, sequenceid=31, filesize=4.9 K 2023-07-15 13:15:59,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/.tmp/table/f65796c1f9b0476392359941f92443da as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/table/f65796c1f9b0476392359941f92443da 2023-07-15 13:15:59,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/rsgroup/e34116b8e67adf943377ac8cc72f7314/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-15 13:15:59,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:59,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:59,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e34116b8e67adf943377ac8cc72f7314: 2023-07-15 13:15:59,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689426957953.e34116b8e67adf943377ac8cc72f7314. 2023-07-15 13:15:59,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f65796c1f9b0476392359941f92443da 2023-07-15 13:15:59,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/table/f65796c1f9b0476392359941f92443da, entries=8, sequenceid=31, filesize=5.2 K 2023-07-15 13:15:59,978 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 134ms, sequenceid=31, compaction requested=false 2023-07-15 13:15:59,989 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-15 13:15:59,990 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:15:59,990 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:15:59,990 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:15:59,990 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 13:16:00,042 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40619,1689426956776; all regions closed. 
2023-07-15 13:16:00,042 DEBUG [RS:1;jenkins-hbase4:40619] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-15 13:16:00,044 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46535,1689426956613; all regions closed. 2023-07-15 13:16:00,044 DEBUG [RS:0;jenkins-hbase4:46535] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-15 13:16:00,051 DEBUG [RS:1;jenkins-hbase4:40619] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs 2023-07-15 13:16:00,051 INFO [RS:1;jenkins-hbase4:40619] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40619%2C1689426956776:(num 1689426957575) 2023-07-15 13:16:00,051 DEBUG [RS:1;jenkins-hbase4:40619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:00,051 INFO [RS:1;jenkins-hbase4:40619] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:00,051 INFO [RS:2;jenkins-hbase4:35105] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35105,1689426956929; zookeeper connection closed. 2023-07-15 13:16:00,051 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,052 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:35105-0x101692013650003, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,052 DEBUG [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs 2023-07-15 13:16:00,051 INFO [RS:1;jenkins-hbase4:40619] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:00,052 INFO [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46535%2C1689426956613.meta:.meta(num 1689426957810) 2023-07-15 13:16:00,053 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:16:00,053 INFO [RS:1;jenkins-hbase4:40619] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:16:00,053 INFO [RS:1;jenkins-hbase4:40619] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:16:00,053 INFO [RS:1;jenkins-hbase4:40619] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-15 13:16:00,053 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7e3c5d33] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7e3c5d33 2023-07-15 13:16:00,055 INFO [RS:1;jenkins-hbase4:40619] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40619 2023-07-15 13:16:00,059 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:16:00,059 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:00,059 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40619,1689426956776 2023-07-15 13:16:00,061 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40619,1689426956776] 2023-07-15 13:16:00,061 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40619,1689426956776; numProcessing=2 2023-07-15 13:16:00,063 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40619,1689426956776 already deleted, retry=false 2023-07-15 13:16:00,063 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40619,1689426956776 expired; onlineServers=1 2023-07-15 13:16:00,063 DEBUG [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/oldWALs 2023-07-15 13:16:00,063 INFO [RS:0;jenkins-hbase4:46535] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46535%2C1689426956613:(num 1689426957573) 2023-07-15 13:16:00,063 DEBUG [RS:0;jenkins-hbase4:46535] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:00,064 INFO [RS:0;jenkins-hbase4:46535] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:00,064 INFO [RS:0;jenkins-hbase4:46535] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:00,064 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 13:16:00,065 INFO [RS:0;jenkins-hbase4:46535] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46535 2023-07-15 13:16:00,069 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46535,1689426956613 2023-07-15 13:16:00,069 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:00,070 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46535,1689426956613] 2023-07-15 13:16:00,070 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46535,1689426956613; numProcessing=3 2023-07-15 13:16:00,071 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46535,1689426956613 already deleted, retry=false 2023-07-15 13:16:00,071 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46535,1689426956613 expired; onlineServers=0 2023-07-15 13:16:00,071 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44709,1689426956415' ***** 2023-07-15 13:16:00,071 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 13:16:00,071 DEBUG [M:0;jenkins-hbase4:44709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5bf41d2c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:00,071 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:00,073 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:00,073 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:00,073 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:00,073 INFO [M:0;jenkins-hbase4:44709] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@41313141{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] server.AbstractConnector(383): Stopped ServerConnector@1e331607{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@78e67007{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78412264{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44709,1689426956415 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44709,1689426956415; all regions closed. 2023-07-15 13:16:00,074 DEBUG [M:0;jenkins-hbase4:44709] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:00,074 INFO [M:0;jenkins-hbase4:44709] master.HMaster(1491): Stopping master jetty server 2023-07-15 13:16:00,075 INFO [M:0;jenkins-hbase4:44709] server.AbstractConnector(383): Stopped ServerConnector@736543ff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:00,075 DEBUG [M:0;jenkins-hbase4:44709] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 13:16:00,075 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 13:16:00,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426957328] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426957328,5,FailOnTimeoutGroup] 2023-07-15 13:16:00,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426957329] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426957329,5,FailOnTimeoutGroup] 2023-07-15 13:16:00,076 DEBUG [M:0;jenkins-hbase4:44709] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 13:16:00,077 INFO [M:0;jenkins-hbase4:44709] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 13:16:00,077 INFO [M:0;jenkins-hbase4:44709] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-15 13:16:00,077 INFO [M:0;jenkins-hbase4:44709] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:00,077 DEBUG [M:0;jenkins-hbase4:44709] master.HMaster(1512): Stopping service threads 2023-07-15 13:16:00,077 INFO [M:0;jenkins-hbase4:44709] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 13:16:00,078 ERROR [M:0;jenkins-hbase4:44709] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-15 13:16:00,078 INFO [M:0;jenkins-hbase4:44709] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 13:16:00,078 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-15 13:16:00,078 DEBUG [M:0;jenkins-hbase4:44709] zookeeper.ZKUtil(398): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 13:16:00,078 WARN [M:0;jenkins-hbase4:44709] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 13:16:00,078 INFO [M:0;jenkins-hbase4:44709] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 13:16:00,079 INFO [M:0;jenkins-hbase4:44709] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 13:16:00,079 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:16:00,079 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:00,079 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:00,079 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:16:00,079 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:00,079 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-15 13:16:00,090 INFO [M:0;jenkins-hbase4:44709] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52807969fc314c528567e30d9063766c 2023-07-15 13:16:00,095 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/52807969fc314c528567e30d9063766c as hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52807969fc314c528567e30d9063766c 2023-07-15 13:16:00,099 INFO [M:0;jenkins-hbase4:44709] regionserver.HStore(1080): Added hdfs://localhost:34061/user/jenkins/test-data/0480d959-c18e-3482-17ce-a0f13fe7ac69/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/52807969fc314c528567e30d9063766c, entries=24, sequenceid=194, filesize=12.4 K 2023-07-15 13:16:00,099 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95208, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=194, compaction requested=false 2023-07-15 13:16:00,101 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 13:16:00,101 DEBUG [M:0;jenkins-hbase4:44709] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:16:00,105 INFO [M:0;jenkins-hbase4:44709] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-15 13:16:00,105 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:16:00,106 INFO [M:0;jenkins-hbase4:44709] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44709 2023-07-15 13:16:00,108 DEBUG [M:0;jenkins-hbase4:44709] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44709,1689426956415 already deleted, retry=false 2023-07-15 13:16:00,319 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,319 INFO [M:0;jenkins-hbase4:44709] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44709,1689426956415; zookeeper connection closed. 2023-07-15 13:16:00,319 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): master:44709-0x101692013650000, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,419 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,419 INFO [RS:0;jenkins-hbase4:46535] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46535,1689426956613; zookeeper connection closed. 2023-07-15 13:16:00,419 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:46535-0x101692013650001, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,420 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d9d43ef] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d9d43ef 2023-07-15 13:16:00,519 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,519 INFO [RS:1;jenkins-hbase4:40619] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40619,1689426956776; zookeeper connection closed. 
2023-07-15 13:16:00,520 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): regionserver:40619-0x101692013650002, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:00,520 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@23ce9a70] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@23ce9a70 2023-07-15 13:16:00,520 INFO [Listener at localhost/41271] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-15 13:16:00,520 WARN [Listener at localhost/41271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:16:00,524 INFO [Listener at localhost/41271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:16:00,630 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:16:00,630 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1901408927-172.31.14.131-1689426955566 (Datanode Uuid b7331fe9-5fad-4222-8186-e073ac1a9d01) service to localhost/127.0.0.1:34061 2023-07-15 13:16:00,630 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data5/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,631 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data6/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,632 WARN [Listener at localhost/41271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:16:00,635 INFO [Listener at localhost/41271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:16:00,740 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:16:00,740 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1901408927-172.31.14.131-1689426955566 (Datanode Uuid 8241a952-6f63-402e-af90-f07638c938ea) service to localhost/127.0.0.1:34061 2023-07-15 13:16:00,741 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data3/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,741 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data4/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,742 WARN [Listener at localhost/41271] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 13:16:00,745 INFO [Listener at localhost/41271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:16:00,848 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 13:16:00,848 WARN [BP-1901408927-172.31.14.131-1689426955566 heartbeating to localhost/127.0.0.1:34061] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1901408927-172.31.14.131-1689426955566 (Datanode Uuid 61846601-6be8-45f0-9aa7-87a6dc8df716) service to localhost/127.0.0.1:34061 2023-07-15 13:16:00,849 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data1/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,849 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/cluster_a8f5af24-9fe6-0cda-6638-0e398d161cda/dfs/data/data2/current/BP-1901408927-172.31.14.131-1689426955566] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 13:16:00,859 INFO [Listener at localhost/41271] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 13:16:00,975 INFO [Listener at localhost/41271] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-15 13:16:01,004 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-15 13:16:01,004 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 13:16:01,004 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.log.dir so I do NOT create it in target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049 2023-07-15 13:16:01,004 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5d9c824c-4370-2116-8258-a72b72306248/hadoop.tmp.dir so I do NOT create it in target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9, deleteOnExit=true 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/test.cache.data in system properties and HBase conf 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir in system properties and HBase conf 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 13:16:01,005 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 13:16:01,005 DEBUG [Listener at localhost/41271] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 13:16:01,006 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/nfs.dump.dir in system properties and HBase conf 2023-07-15 13:16:01,007 INFO [Listener at localhost/41271] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir in system properties and HBase conf 2023-07-15 13:16:01,007 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 13:16:01,007 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 13:16:01,007 INFO [Listener at localhost/41271] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 13:16:01,011 WARN [Listener at localhost/41271] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:16:01,011 WARN [Listener at localhost/41271] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:16:01,055 WARN [Listener at localhost/41271] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:16:01,057 INFO [Listener at localhost/41271] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:16:01,062 INFO [Listener at localhost/41271] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/Jetty_localhost_38085_hdfs____x34vo2/webapp 2023-07-15 13:16:01,073 DEBUG [Listener at localhost/41271-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10169201365000a, quorum=127.0.0.1:62025, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-15 13:16:01,073 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10169201365000a, quorum=127.0.0.1:62025, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-15 13:16:01,155 INFO [Listener at localhost/41271] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38085 2023-07-15 13:16:01,160 WARN [Listener at localhost/41271] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 13:16:01,160 WARN [Listener at localhost/41271] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 13:16:01,202 WARN [Listener at localhost/40589] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:16:01,215 WARN [Listener at localhost/40589] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:16:01,217 WARN [Listener 
at localhost/40589] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:16:01,218 INFO [Listener at localhost/40589] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:16:01,226 INFO [Listener at localhost/40589] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/Jetty_localhost_41459_datanode____iy5iua/webapp 2023-07-15 13:16:01,318 INFO [Listener at localhost/40589] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41459 2023-07-15 13:16:01,326 WARN [Listener at localhost/40709] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:16:01,343 WARN [Listener at localhost/40709] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:16:01,345 WARN [Listener at localhost/40709] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:16:01,347 INFO [Listener at localhost/40709] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:16:01,350 INFO [Listener at localhost/40709] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/Jetty_localhost_33245_datanode____sm8qii/webapp 2023-07-15 13:16:01,428 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13b2b89a090d0d46: Processing first storage report for DS-44e314fd-6fce-4cae-bb0f-22828eac673a from datanode 67dee77f-f2d4-4f4e-a6c3-2404d30254ad 2023-07-15 13:16:01,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13b2b89a090d0d46: from storage DS-44e314fd-6fce-4cae-bb0f-22828eac673a node DatanodeRegistration(127.0.0.1:37231, datanodeUuid=67dee77f-f2d4-4f4e-a6c3-2404d30254ad, infoPort=34837, infoSecurePort=0, ipcPort=40709, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,428 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13b2b89a090d0d46: Processing first storage report for DS-87af045d-f514-4f5c-8f6a-c579afc23de4 from datanode 67dee77f-f2d4-4f4e-a6c3-2404d30254ad 2023-07-15 13:16:01,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13b2b89a090d0d46: from storage DS-87af045d-f514-4f5c-8f6a-c579afc23de4 node DatanodeRegistration(127.0.0.1:37231, datanodeUuid=67dee77f-f2d4-4f4e-a6c3-2404d30254ad, infoPort=34837, infoSecurePort=0, ipcPort=40709, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,476 INFO [Listener at localhost/40709] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33245 2023-07-15 13:16:01,484 WARN [Listener at localhost/37083] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-15 13:16:01,506 WARN [Listener at localhost/37083] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 13:16:01,510 WARN [Listener at localhost/37083] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 13:16:01,512 INFO [Listener at localhost/37083] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 13:16:01,516 INFO [Listener at localhost/37083] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/Jetty_localhost_45597_datanode____.dmqp7r/webapp 2023-07-15 13:16:01,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x473d3ffc733c4355: Processing first storage report for DS-359df213-6f3a-4ec6-96b6-a19146c9cae7 from datanode 9a63e9f3-235e-4a02-b774-9a6355d3e7a3 2023-07-15 13:16:01,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x473d3ffc733c4355: from storage DS-359df213-6f3a-4ec6-96b6-a19146c9cae7 node DatanodeRegistration(127.0.0.1:41935, datanodeUuid=9a63e9f3-235e-4a02-b774-9a6355d3e7a3, infoPort=39081, infoSecurePort=0, ipcPort=37083, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x473d3ffc733c4355: Processing first storage report for DS-4a892b37-1a6f-4330-9f82-b3e01a67b463 from datanode 9a63e9f3-235e-4a02-b774-9a6355d3e7a3 2023-07-15 13:16:01,579 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x473d3ffc733c4355: from storage DS-4a892b37-1a6f-4330-9f82-b3e01a67b463 node DatanodeRegistration(127.0.0.1:41935, datanodeUuid=9a63e9f3-235e-4a02-b774-9a6355d3e7a3, infoPort=39081, infoSecurePort=0, ipcPort=37083, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,626 INFO [Listener at localhost/37083] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45597 2023-07-15 13:16:01,634 WARN [Listener at localhost/36623] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 13:16:01,728 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1215298ed98bad36: Processing first storage report for DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02 from datanode 93fc05bd-b830-4238-ad1d-c4ba32e91a23 2023-07-15 13:16:01,728 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1215298ed98bad36: from storage DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02 node DatanodeRegistration(127.0.0.1:43853, datanodeUuid=93fc05bd-b830-4238-ad1d-c4ba32e91a23, infoPort=46469, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,729 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1215298ed98bad36: Processing first storage 
report for DS-ac764693-ed1e-4303-894a-f4129ddf639e from datanode 93fc05bd-b830-4238-ad1d-c4ba32e91a23 2023-07-15 13:16:01,729 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1215298ed98bad36: from storage DS-ac764693-ed1e-4303-894a-f4129ddf639e node DatanodeRegistration(127.0.0.1:43853, datanodeUuid=93fc05bd-b830-4238-ad1d-c4ba32e91a23, infoPort=46469, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=279328163;c=1689426961014), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 13:16:01,742 DEBUG [Listener at localhost/36623] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049 2023-07-15 13:16:01,744 INFO [Listener at localhost/36623] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/zookeeper_0, clientPort=62891, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 13:16:01,745 INFO [Listener at localhost/36623] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62891 2023-07-15 13:16:01,746 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,746 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,763 INFO [Listener at localhost/36623] util.FSUtils(471): Created version file at hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 with version=8 2023-07-15 13:16:01,763 INFO [Listener at localhost/36623] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42517/user/jenkins/test-data/e2da1ff8-e6cf-538f-c7b1-e66a894cf6fe/hbase-staging 2023-07-15 13:16:01,764 DEBUG [Listener at localhost/36623] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 13:16:01,764 DEBUG [Listener at localhost/36623] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 13:16:01,764 DEBUG [Listener at localhost/36623] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 13:16:01,764 DEBUG [Listener at localhost/36623] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-15 13:16:01,765 INFO [Listener at localhost/36623] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:16:01,765 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,765 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,766 INFO [Listener at localhost/36623] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:16:01,766 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,766 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:16:01,766 INFO [Listener at localhost/36623] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:16:01,768 INFO [Listener at localhost/36623] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37219 2023-07-15 13:16:01,768 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,769 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,770 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37219 connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:01,776 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:372190x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:16:01,778 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37219-0x101692028530000 connected 2023-07-15 13:16:01,793 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:01,793 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:01,794 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:16:01,796 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37219 2023-07-15 13:16:01,797 DEBUG [Listener at localhost/36623] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37219 2023-07-15 13:16:01,798 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37219 2023-07-15 13:16:01,801 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37219 2023-07-15 13:16:01,801 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37219 2023-07-15 13:16:01,802 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:16:01,803 INFO [Listener at localhost/36623] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 13:16:01,804 INFO [Listener at localhost/36623] http.HttpServer(1146): Jetty bound to port 32893 2023-07-15 13:16:01,804 INFO [Listener at localhost/36623] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:01,805 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:01,805 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@592e1b3c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:16:01,806 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:01,806 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f6563c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:16:01,921 INFO [Listener at localhost/36623] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:16:01,922 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:16:01,922 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:16:01,922 INFO [Listener at localhost/36623] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:16:01,923 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:01,924 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@15638a64{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/jetty-0_0_0_0-32893-hbase-server-2_4_18-SNAPSHOT_jar-_-any-787360675649364872/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:16:01,926 INFO [Listener at localhost/36623] server.AbstractConnector(333): Started ServerConnector@5e57cc40{HTTP/1.1, (http/1.1)}{0.0.0.0:32893} 2023-07-15 13:16:01,926 INFO [Listener at localhost/36623] server.Server(415): Started @43752ms 2023-07-15 13:16:01,926 INFO [Listener at localhost/36623] master.HMaster(444): hbase.rootdir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221, hbase.cluster.distributed=false 2023-07-15 13:16:01,939 INFO [Listener at localhost/36623] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:16:01,939 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,939 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,939 INFO 
[Listener at localhost/36623] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:16:01,939 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:01,940 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:16:01,940 INFO [Listener at localhost/36623] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:16:01,940 INFO [Listener at localhost/36623] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46785 2023-07-15 13:16:01,941 INFO [Listener at localhost/36623] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:16:01,942 DEBUG [Listener at localhost/36623] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:16:01,942 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,943 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:01,944 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46785 connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:01,947 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:467850x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:16:01,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46785-0x101692028530001 connected 2023-07-15 13:16:01,949 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:01,949 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:01,950 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:16:01,950 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46785 2023-07-15 13:16:01,951 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46785 2023-07-15 13:16:01,952 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46785 2023-07-15 13:16:01,955 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46785 2023-07-15 13:16:01,958 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46785 2023-07-15 13:16:01,960 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:16:01,960 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:16:01,960 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:16:01,961 INFO [Listener at localhost/36623] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:16:01,961 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:16:01,961 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:16:01,961 INFO [Listener at localhost/36623] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:16:01,961 INFO [Listener at localhost/36623] http.HttpServer(1146): Jetty bound to port 37321 2023-07-15 13:16:01,962 INFO [Listener at localhost/36623] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:01,963 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:01,963 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6f0b8925{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:16:01,964 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:01,964 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4e2c1e18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:16:02,077 INFO [Listener at localhost/36623] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:16:02,078 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:16:02,079 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:16:02,079 INFO [Listener at localhost/36623] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:16:02,080 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,081 INFO 
[Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6fba47ee{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/jetty-0_0_0_0-37321-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9192316045989584419/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:02,082 INFO [Listener at localhost/36623] server.AbstractConnector(333): Started ServerConnector@68f3a1b3{HTTP/1.1, (http/1.1)}{0.0.0.0:37321} 2023-07-15 13:16:02,082 INFO [Listener at localhost/36623] server.Server(415): Started @43909ms 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:16:02,095 INFO [Listener at localhost/36623] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:16:02,096 INFO [Listener at localhost/36623] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42269 2023-07-15 13:16:02,096 INFO [Listener at localhost/36623] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:16:02,098 DEBUG [Listener at localhost/36623] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:16:02,098 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:02,099 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:02,100 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42269 connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:02,104 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:422690x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 
13:16:02,106 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42269-0x101692028530002 connected 2023-07-15 13:16:02,106 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:02,106 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:02,107 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:16:02,107 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42269 2023-07-15 13:16:02,107 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42269 2023-07-15 13:16:02,110 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42269 2023-07-15 13:16:02,111 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42269 2023-07-15 13:16:02,111 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42269 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:16:02,113 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:16:02,114 INFO [Listener at localhost/36623] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
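Each regionserver above connects to the ZooKeeper ensemble at 127.0.0.1:62891, receives a session (for example regionserver:42269-0x101692028530002), and then sets watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. A small sketch of that pattern using the plain Apache ZooKeeper client follows; it is only an illustration of the "set watcher on znode that does not yet exist" behavior, not HBase's internal ZKWatcher/ZKUtil code, whose signatures are not shown in this log.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ExistenceWatchSketch {                    // illustrative name
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event=" + event.getType() + " path=" + event.getPath());

        ZooKeeper zk = new ZooKeeper("127.0.0.1:62891", 90_000, watcher);

        // exists() registers an existence watch even when the znode is absent,
        // which is what "Set watcher on znode that does not yet exist" refers to.
        zk.exists("/hbase/master", watcher);
        zk.exists("/hbase/running", watcher);

        // Later, a NodeCreated event fires once the active master creates /hbase/master.
        zk.close();
      }
    }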
2023-07-15 13:16:02,114 INFO [Listener at localhost/36623] http.HttpServer(1146): Jetty bound to port 37465 2023-07-15 13:16:02,114 INFO [Listener at localhost/36623] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:02,117 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,117 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@c284dc1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:16:02,117 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,117 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b79c49e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:16:02,230 INFO [Listener at localhost/36623] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:16:02,230 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:16:02,230 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:16:02,231 INFO [Listener at localhost/36623] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:16:02,231 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,232 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@c8c2101{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/jetty-0_0_0_0-37465-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5648573606303612494/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:02,234 INFO [Listener at localhost/36623] server.AbstractConnector(333): Started ServerConnector@778f6ca0{HTTP/1.1, (http/1.1)}{0.0.0.0:37465} 2023-07-15 13:16:02,235 INFO [Listener at localhost/36623] server.Server(415): Started @44061ms 2023-07-15 13:16:02,246 INFO [Listener at localhost/36623] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:16:02,246 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,246 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,246 INFO [Listener at localhost/36623] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:16:02,246 INFO 
[Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:02,247 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:16:02,247 INFO [Listener at localhost/36623] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:16:02,247 INFO [Listener at localhost/36623] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37449 2023-07-15 13:16:02,248 INFO [Listener at localhost/36623] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:16:02,249 DEBUG [Listener at localhost/36623] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:16:02,249 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:02,250 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:02,251 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37449 connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:02,254 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:374490x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:16:02,255 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37449-0x101692028530003 connected 2023-07-15 13:16:02,256 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:02,256 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:02,256 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:16:02,257 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-15 13:16:02,257 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37449 2023-07-15 13:16:02,257 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37449 2023-07-15 13:16:02,257 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-15 13:16:02,258 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37449 2023-07-15 13:16:02,259 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:16:02,259 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:16:02,259 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:16:02,260 INFO [Listener at localhost/36623] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:16:02,260 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:16:02,260 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:16:02,260 INFO [Listener at localhost/36623] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:16:02,260 INFO [Listener at localhost/36623] http.HttpServer(1146): Jetty bound to port 40757 2023-07-15 13:16:02,261 INFO [Listener at localhost/36623] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:02,263 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,263 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ef56e43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:16:02,264 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,264 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33138f5e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:16:02,376 INFO [Listener at localhost/36623] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:16:02,377 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:16:02,377 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:16:02,377 INFO [Listener at localhost/36623] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 13:16:02,378 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:02,379 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4d9e1b0c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/jetty-0_0_0_0-40757-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2687561968223918408/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:02,380 INFO [Listener at localhost/36623] server.AbstractConnector(333): Started ServerConnector@788676c3{HTTP/1.1, (http/1.1)}{0.0.0.0:40757} 2023-07-15 13:16:02,381 INFO [Listener at localhost/36623] server.Server(415): Started @44207ms 2023-07-15 13:16:02,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:02,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@b3e4f97{HTTP/1.1, (http/1.1)}{0.0.0.0:40951} 2023-07-15 13:16:02,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44212ms 2023-07-15 13:16:02,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,388 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:16:02,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,390 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:02,390 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:02,390 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:02,390 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:02,390 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:16:02,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:16:02,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37219,1689426961765 from backup master directory 2023-07-15 13:16:02,395 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,395 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 13:16:02,395 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:16:02,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/hbase.id with ID: 168658bf-d77c-40c2-a3cf-b9378aa6360a 2023-07-15 13:16:02,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:02,428 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x528e77ff to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:02,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@740aeb3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:02,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:02,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 13:16:02,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:02,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store-tmp 2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:16:02,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:02,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
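The master is bootstrapping its local 'master:store' region (encoded 1595e783b53d99cd5eef43b6debb2682) with a single 'proc' column family whose attributes are printed above (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', BLOCKCACHE => 'true'). As a hedged sketch, the public descriptor builders can express an equivalent schema; the table name below is purely illustrative, since 'master:store' itself is internal and never created by test code directly.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {                        // illustrative name
      public static void main(String[] args) {
        // Mirrors the attributes printed for the 'proc' family in the log above.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
            .build();

        // Illustrative table name; 'master:store' is created internally by the master.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example:store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(desc);
      }
    }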
2023-07-15 13:16:02,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:16:02,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/WALs/jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37219%2C1689426961765, suffix=, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/WALs/jenkins-hbase4.apache.org,37219,1689426961765, archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/oldWALs, maxLogs=10 2023-07-15 13:16:02,497 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:02,500 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:02,512 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:02,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/WALs/jenkins-hbase4.apache.org,37219,1689426961765/jenkins-hbase4.apache.org%2C37219%2C1689426961765.1689426962472 2023-07-15 13:16:02,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK], DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK], DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK]] 2023-07-15 13:16:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:16:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,524 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,525 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 13:16:02,526 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 13:16:02,527 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:02,532 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,532 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 13:16:02,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:16:02,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11147035680, jitterRate=0.03814859688282013}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:16:02,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:16:02,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 13:16:02,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 13:16:02,539 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 13:16:02,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 13:16:02,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-15 13:16:02,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-15 13:16:02,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 13:16:02,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 13:16:02,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-15 13:16:02,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 13:16:02,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 13:16:02,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 13:16:02,549 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 13:16:02,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 13:16:02,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 13:16:02,552 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:02,552 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,552 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-15 13:16:02,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37219,1689426961765, sessionid=0x101692028530000, setting cluster-up flag (Was=false) 2023-07-15 13:16:02,554 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:02,555 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:02,558 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 13:16:02,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,568 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:02,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 13:16:02,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:02,573 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.hbase-snapshot/.tmp 2023-07-15 13:16:02,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 13:16:02,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 13:16:02,575 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:16:02,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 13:16:02,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
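The last entries above show the master registering the RSGroupAdminService coprocessor service and loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint (plus the test's CPMasterObserver). A minimal sketch of the configuration that enables the rsgroup feature in an HBase 2.x cluster follows; the property keys are taken from the rsgroup setup documented for this version, and passing them to the cluster configuration is assumed to be what the test harness does.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RSGroupConfigSketch {                     // illustrative name
      public static void main(String[] args) {
        // Enable the rsgroup endpoint and balancer that appear in the log above.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        // Hand this Configuration to the minicluster / cluster startup.
        System.out.println(conf.get("hbase.coprocessor.master.classes"));
      }
    }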
2023-07-15 13:16:02,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 13:16:02,583 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(951): ClusterId : 168658bf-d77c-40c2-a3cf-b9378aa6360a 2023-07-15 13:16:02,584 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:16:02,584 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(951): ClusterId : 168658bf-d77c-40c2-a3cf-b9378aa6360a 2023-07-15 13:16:02,584 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(951): ClusterId : 168658bf-d77c-40c2-a3cf-b9378aa6360a 2023-07-15 13:16:02,587 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:16:02,587 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:16:02,589 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:16:02,589 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:16:02,589 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:16:02,589 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:16:02,591 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:16:02,593 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ReadOnlyZKClient(139): Connect 0x4ca1657c to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:02,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:16:02,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 13:16:02,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 13:16:02,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 13:16:02,597 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:16:02,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,597 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:16:02,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:16:02,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,598 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:16:02,600 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:16:02,612 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42668298, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:02,612 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46590e0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:02,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689426992613 2023-07-15 13:16:02,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 13:16:02,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 13:16:02,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 13:16:02,613 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ReadOnlyZKClient(139): Connect 0x1eb5301d to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:02,614 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ReadOnlyZKClient(139): Connect 0x7e813f95 to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:02,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 13:16:02,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 13:16:02,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 13:16:02,618 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:16:02,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,618 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 13:16:02,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 13:16:02,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 13:16:02,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 13:16:02,621 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:02,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 13:16:02,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 13:16:02,631 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46785 2023-07-15 13:16:02,631 INFO [RS:0;jenkins-hbase4:46785] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:16:02,631 INFO [RS:0;jenkins-hbase4:46785] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:16:02,631 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:16:02,632 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37219,1689426961765 with isa=jenkins-hbase4.apache.org/172.31.14.131:46785, startcode=1689426961939 2023-07-15 13:16:02,632 DEBUG [RS:0;jenkins-hbase4:46785] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:16:02,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426962631,5,FailOnTimeoutGroup] 2023-07-15 13:16:02,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426962635,5,FailOnTimeoutGroup] 2023-07-15 13:16:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-15 13:16:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,649 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58245, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:16:02,652 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37219] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,653 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 13:16:02,653 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 13:16:02,654 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 2023-07-15 13:16:02,654 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40589 2023-07-15 13:16:02,654 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32893 2023-07-15 13:16:02,654 DEBUG [RS:1;jenkins-hbase4:42269] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7abea49d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:02,654 DEBUG [RS:1;jenkins-hbase4:42269] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fbc35ae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:02,654 DEBUG [RS:2;jenkins-hbase4:37449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31a62982, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:02,655 DEBUG [RS:2;jenkins-hbase4:37449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f616c9a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:02,655 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,655 WARN [RS:0;jenkins-hbase4:46785] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 13:16:02,656 INFO [RS:0;jenkins-hbase4:46785] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:02,656 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,663 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:02,667 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42269 2023-07-15 13:16:02,667 INFO [RS:1;jenkins-hbase4:42269] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:16:02,667 INFO [RS:1;jenkins-hbase4:42269] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:16:02,667 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:16:02,669 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37219,1689426961765 with isa=jenkins-hbase4.apache.org/172.31.14.131:42269, startcode=1689426962094 2023-07-15 13:16:02,670 DEBUG [RS:1;jenkins-hbase4:42269] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:16:02,672 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37449 2023-07-15 13:16:02,672 INFO [RS:2;jenkins-hbase4:37449] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:16:02,672 INFO [RS:2;jenkins-hbase4:37449] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:16:02,672 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:16:02,673 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37219,1689426961765 with isa=jenkins-hbase4.apache.org/172.31.14.131:37449, startcode=1689426962246 2023-07-15 13:16:02,673 DEBUG [RS:2;jenkins-hbase4:37449] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:16:02,677 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46785,1689426961939] 2023-07-15 13:16:02,684 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39757, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:16:02,684 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37219] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,684 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 13:16:02,684 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-15 13:16:02,685 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 2023-07-15 13:16:02,685 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40589 2023-07-15 13:16:02,685 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32893 2023-07-15 13:16:02,685 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:16:02,685 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37219] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,686 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 13:16:02,686 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 13:16:02,686 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 2023-07-15 13:16:02,686 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40589 2023-07-15 13:16:02,686 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32893 2023-07-15 13:16:02,692 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:02,693 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37449,1689426962246] 2023-07-15 13:16:02,693 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,693 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42269,1689426962094] 2023-07-15 13:16:02,693 WARN [RS:2;jenkins-hbase4:37449] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 13:16:02,693 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,693 INFO [RS:2;jenkins-hbase4:37449] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:02,693 WARN [RS:1;jenkins-hbase4:42269] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 13:16:02,694 INFO [RS:1;jenkins-hbase4:42269] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:02,694 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,694 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,694 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,695 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,696 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,697 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:16:02,697 INFO [RS:0;jenkins-hbase4:46785] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:16:02,707 INFO [RS:0;jenkins-hbase4:46785] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:16:02,720 INFO [RS:0;jenkins-hbase4:46785] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:16:02,720 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,721 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:16:02,727 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,728 DEBUG [RS:0;jenkins-hbase4:46785] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,730 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:16:02,731 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 13:16:02,731 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 2023-07-15 13:16:02,741 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,742 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,742 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,744 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:16:02,744 INFO [RS:1;jenkins-hbase4:42269] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:16:02,747 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,749 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,749 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,750 INFO [RS:1;jenkins-hbase4:42269] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:16:02,752 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,752 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,752 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,753 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:16:02,753 INFO [RS:2;jenkins-hbase4:37449] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:16:02,762 INFO [RS:2;jenkins-hbase4:37449] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:16:02,764 INFO [RS:2;jenkins-hbase4:37449] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:16:02,765 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:02,765 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:16:02,767 INFO [RS:1;jenkins-hbase4:42269] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:16:02,767 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,774 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:16:02,774 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,774 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,775 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,775 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,775 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,775 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:02,776 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:1;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:16:02,779 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,779 DEBUG [RS:2;jenkins-hbase4:37449] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:02,782 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,783 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:02,783 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,784 INFO [RS:0;jenkins-hbase4:46785] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:16:02,784 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46785,1689426961939-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,858 INFO [RS:1;jenkins-hbase4:42269] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:16:02,858 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,858 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42269,1689426962094-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,858 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,858 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:02,858 INFO [RS:0;jenkins-hbase4:46785] regionserver.Replication(203): jenkins-hbase4.apache.org,46785,1689426961939 started 2023-07-15 13:16:02,859 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46785,1689426961939, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46785, sessionid=0x101692028530001 2023-07-15 13:16:02,863 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:16:02,863 DEBUG [RS:0;jenkins-hbase4:46785] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,863 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:02,863 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46785,1689426961939' 2023-07-15 13:16:02,863 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:16:02,864 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:16:02,865 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:16:02,865 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:16:02,865 DEBUG [RS:0;jenkins-hbase4:46785] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:02,865 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46785,1689426961939' 2023-07-15 13:16:02,865 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(134): Checking for aborted 
procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:16:02,866 DEBUG [RS:0;jenkins-hbase4:46785] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:16:02,866 DEBUG [RS:0;jenkins-hbase4:46785] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:16:02,866 INFO [RS:0;jenkins-hbase4:46785] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:16:02,866 INFO [RS:0;jenkins-hbase4:46785] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 13:16:02,871 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:16:02,873 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/info 2023-07-15 13:16:02,873 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:16:02,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:02,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:16:02,875 INFO [RS:2;jenkins-hbase4:37449] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:16:02,875 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37449,1689426962246-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:02,879 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:16:02,879 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:16:02,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:02,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:16:02,882 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/table 2023-07-15 13:16:02,882 INFO [RS:1;jenkins-hbase4:42269] regionserver.Replication(203): jenkins-hbase4.apache.org,42269,1689426962094 started 2023-07-15 13:16:02,882 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42269,1689426962094, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42269, sessionid=0x101692028530002 2023-07-15 13:16:02,882 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:16:02,882 DEBUG [RS:1;jenkins-hbase4:42269] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,882 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42269,1689426962094' 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:16:02,883 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:16:02,883 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42269,1689426962094' 2023-07-15 13:16:02,883 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:16:02,884 DEBUG [RS:1;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:16:02,884 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740 2023-07-15 13:16:02,884 DEBUG [RS:1;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:16:02,884 INFO [RS:1;jenkins-hbase4:42269] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:16:02,884 INFO [RS:1;jenkins-hbase4:42269] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 13:16:02,884 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740 2023-07-15 13:16:02,886 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-15 13:16:02,887 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:16:02,891 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:16:02,892 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10814664160, jitterRate=0.007194086909294128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:16:02,892 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:16:02,892 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:16:02,892 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:16:02,892 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:16:02,892 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:16:02,892 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:16:02,892 INFO [RS:2;jenkins-hbase4:37449] regionserver.Replication(203): jenkins-hbase4.apache.org,37449,1689426962246 started 2023-07-15 13:16:02,892 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37449,1689426962246, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37449, sessionid=0x101692028530003 2023-07-15 13:16:02,892 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:16:02,892 DEBUG [RS:2;jenkins-hbase4:37449] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,892 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37449,1689426962246' 2023-07-15 13:16:02,892 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:16:02,893 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:16:02,893 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:16:02,893 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:16:02,893 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 13:16:02,893 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 13:16:02,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 
13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37449,1689426962246' 2023-07-15 13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:16:02,895 DEBUG [RS:2;jenkins-hbase4:37449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:16:02,896 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 13:16:02,897 DEBUG [RS:2;jenkins-hbase4:37449] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:16:02,897 INFO [RS:2;jenkins-hbase4:37449] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:16:02,897 INFO [RS:2;jenkins-hbase4:37449] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 13:16:02,897 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 13:16:02,968 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46785%2C1689426961939, suffix=, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,46785,1689426961939, archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs, maxLogs=32 2023-07-15 13:16:02,983 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:02,984 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:02,984 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:02,986 INFO [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42269%2C1689426962094, suffix=, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094, 
archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs, maxLogs=32 2023-07-15 13:16:02,986 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,46785,1689426961939/jenkins-hbase4.apache.org%2C46785%2C1689426961939.1689426962968 2023-07-15 13:16:02,986 DEBUG [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK], DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK], DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK]] 2023-07-15 13:16:03,000 INFO [RS:2;jenkins-hbase4:37449] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37449%2C1689426962246, suffix=, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,37449,1689426962246, archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs, maxLogs=32 2023-07-15 13:16:03,001 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:03,002 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:03,002 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:03,006 INFO [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094/jenkins-hbase4.apache.org%2C42269%2C1689426962094.1689426962986 2023-07-15 13:16:03,006 DEBUG [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK], DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK], DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK]] 2023-07-15 13:16:03,019 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:03,019 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:03,019 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:03,023 INFO [RS:2;jenkins-hbase4:37449] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,37449,1689426962246/jenkins-hbase4.apache.org%2C37449%2C1689426962246.1689426963001 2023-07-15 13:16:03,024 DEBUG [RS:2;jenkins-hbase4:37449] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK], DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK], DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK]] 2023-07-15 13:16:03,047 DEBUG [jenkins-hbase4:37219] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 13:16:03,047 DEBUG [jenkins-hbase4:37219] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:16:03,047 DEBUG [jenkins-hbase4:37219] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:16:03,047 DEBUG [jenkins-hbase4:37219] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:16:03,048 DEBUG [jenkins-hbase4:37219] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:16:03,048 DEBUG [jenkins-hbase4:37219] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:16:03,048 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42269,1689426962094, state=OPENING 2023-07-15 13:16:03,050 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 13:16:03,051 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:03,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42269,1689426962094}] 2023-07-15 13:16:03,051 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:16:03,198 WARN [ReadOnlyZKClient-127.0.0.1:62891@0x528e77ff] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-15 13:16:03,198 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:03,200 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55292, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:16:03,200 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42269] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55292 deadline: 1689427023200, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:03,205 DEBUG 
[RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:03,207 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:16:03,208 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:16:03,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 13:16:03,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:03,213 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42269%2C1689426962094.meta, suffix=.meta, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094, archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs, maxLogs=32 2023-07-15 13:16:03,227 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:03,227 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:03,227 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:03,231 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094/jenkins-hbase4.apache.org%2C42269%2C1689426962094.meta.1689426963213.meta 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK], DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK], DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK]] 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 13:16:03,231 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 13:16:03,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:03,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 13:16:03,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 13:16:03,233 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 13:16:03,234 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/info 2023-07-15 13:16:03,234 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/info 2023-07-15 13:16:03,234 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 13:16:03,234 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:03,235 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 13:16:03,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:16:03,235 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/rep_barrier 2023-07-15 13:16:03,236 INFO 
[StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 13:16:03,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:03,236 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 13:16:03,237 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/table 2023-07-15 13:16:03,237 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/table 2023-07-15 13:16:03,237 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 13:16:03,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:03,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740 2023-07-15 13:16:03,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740 2023-07-15 13:16:03,241 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
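The CompactionConfiguration(173) lines above are emitted once per column family as each store of hbase:meta opens, and they echo the effective compaction settings: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2 (5.0 off-peak), and a 7-day major-compaction period with 0.5 jitter. As a rough illustration only (not part of the test run), those values correspond to the standard HBase 2.x configuration keys in the sketch below; the key names are believed correct for 2.4 but should still be verified against the version in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfSketch {
      public static void main(String[] args) {
        // Standard HBase 2.x keys believed to back the values logged by CompactionConfiguration above.
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period, 7 days in ms
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        System.out.println("compaction ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
      }
    }

These keys apply cluster-wide; per-table or per-family overrides would instead go through the table descriptor.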
2023-07-15 13:16:03,242 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 13:16:03,243 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10171640800, jitterRate=-0.05269213020801544}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 13:16:03,243 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 13:16:03,244 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689426963205 2023-07-15 13:16:03,248 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 13:16:03,249 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 13:16:03,249 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42269,1689426962094, state=OPEN 2023-07-15 13:16:03,253 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 13:16:03,253 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 13:16:03,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 13:16:03,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42269,1689426962094 in 202 msec 2023-07-15 13:16:03,256 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 13:16:03,256 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 360 msec 2023-07-15 13:16:03,257 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 681 msec 2023-07-15 13:16:03,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689426963257, completionTime=-1 2023-07-15 13:16:03,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 13:16:03,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
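The records above show the single hbase:meta region (1588230740) finishing its open on jenkins-hbase4.apache.org,42269,1689426962094 and the location under /hbase/meta-region-server flipping to state=OPEN, after which the earlier NotServingRegionException seen by the RSGroup startup worker stops recurring. For illustration only, a client resolves that same location through the public RegionLocator API; the sketch below is generic code, not taken from this test, and assumes an hbase-site.xml pointing at the cluster is on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // hbase:meta has a single region covering the whole key space, so the empty start row is enough.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is served by " + loc.getServerName());
        }
      }
    }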
2023-07-15 13:16:03,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 13:16:03,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689427023261 2023-07-15 13:16:03,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689427083261 2023-07-15 13:16:03,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-15 13:16:03,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37219,1689426961765-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:03,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37219,1689426961765-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:03,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37219,1689426961765-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:03,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37219, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:03,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:03,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
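The ChoreService(166) lines above list the master background chores and the periods they were scheduled with: BalancerChore, RegionNormalizerChore and CatalogJanitor every 300000 ms, the cluster status chore every 60000 ms, and HbckChore hourly. Purely as a sketch, three of those periods map onto configuration keys along the lines below; the key names are an assumption recalled from HBase 2.x documentation and should be double-checked before relying on them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ChorePeriodSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key names for the chore periods seen above (verify against the running version).
        conf.setInt("hbase.balancer.period", 300000);         // BalancerChore
        conf.setInt("hbase.normalizer.period", 300000);       // RegionNormalizerChore
        conf.setInt("hbase.catalogjanitor.interval", 300000); // CatalogJanitor
        System.out.println("balancer period = " + conf.getInt("hbase.balancer.period", -1) + " ms");
      }
    }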
2023-07-15 13:16:03,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:03,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 13:16:03,268 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 13:16:03,269 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:16:03,269 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:16:03,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb empty. 2023-07-15 13:16:03,272 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,272 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 13:16:03,286 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 13:16:03,287 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a79daec580795a4ae5fef038a97752fb, NAME => 'hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp 2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a79daec580795a4ae5fef038a97752fb, disabling compactions & flushes 2023-07-15 13:16:03,295 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 
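The HMaster(2148) record above prints the descriptor the master uses for hbase:namespace: a single 'info' family with VERSIONS=10, IN_MEMORY=true, BLOCKSIZE=8192 and BLOOMFILTER=ROW, created internally through CreateTableProcedure (pid=4). For comparison only, building a roughly equivalent descriptor through the public client API looks like the sketch below; the table name is a placeholder, since hbase:namespace itself is system-managed.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder table name; the family attributes mirror the descriptor printed in the log.
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("namespace_like_example"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setMaxVersions(10)
                .setInMemory(true)
                .setBlocksize(8192)
                .setBloomFilterType(BloomType.ROW)
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc); // goes through the same CreateTableProcedure machinery seen in the log
        }
      }
    }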
2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. after waiting 0 ms 2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:03,295 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:03,295 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a79daec580795a4ae5fef038a97752fb: 2023-07-15 13:16:03,299 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:16:03,299 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426963299"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426963299"}]},"ts":"1689426963299"} 2023-07-15 13:16:03,302 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 13:16:03,302 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:16:03,303 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426963303"}]},"ts":"1689426963303"} 2023-07-15 13:16:03,304 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 13:16:03,308 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:16:03,308 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:16:03,308 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:16:03,308 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:16:03,308 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:16:03,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a79daec580795a4ae5fef038a97752fb, ASSIGN}] 2023-07-15 13:16:03,310 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a79daec580795a4ae5fef038a97752fb, ASSIGN 2023-07-15 13:16:03,311 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a79daec580795a4ae5fef038a97752fb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37449,1689426962246; forceNewPlan=false, retain=false 2023-07-15 13:16:03,356 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-15 13:16:03,461 INFO [jenkins-hbase4:37219] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 13:16:03,462 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a79daec580795a4ae5fef038a97752fb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:03,462 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426963462"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426963462"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426963462"}]},"ts":"1689426963462"} 2023-07-15 13:16:03,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure a79daec580795a4ae5fef038a97752fb, server=jenkins-hbase4.apache.org,37449,1689426962246}] 2023-07-15 13:16:03,621 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:03,621 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:16:03,622 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:16:03,626 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 
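The RegionStateStore(405) Put above records the OPENING assignment as plain cells in hbase:meta (info:regioninfo, info:sn, info:state), and a second Put later in the log adds info:server, info:serverstartcode and info:seqnumDuringOpen once the region is OPEN. As an illustration only, the same row can be read back with an ordinary Get against hbase:meta; the row key below is copied from this log and only resolves inside this particular test cluster.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRowSketch {
      public static void main(String[] args) throws Exception {
        // Row key copied from the log; it exists only in this test cluster.
        byte[] row = Bytes.toBytes("hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Result r = meta.get(new Get(row).addFamily(Bytes.toBytes("info")));
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
          System.out.println("state=" + Bytes.toString(state) + ", server=" + Bytes.toString(server));
        }
      }
    }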
2023-07-15 13:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a79daec580795a4ae5fef038a97752fb, NAME => 'hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,627 INFO [StoreOpener-a79daec580795a4ae5fef038a97752fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,629 DEBUG [StoreOpener-a79daec580795a4ae5fef038a97752fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/info 2023-07-15 13:16:03,629 DEBUG [StoreOpener-a79daec580795a4ae5fef038a97752fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/info 2023-07-15 13:16:03,629 INFO [StoreOpener-a79daec580795a4ae5fef038a97752fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a79daec580795a4ae5fef038a97752fb columnFamilyName info 2023-07-15 13:16:03,630 INFO [StoreOpener-a79daec580795a4ae5fef038a97752fb-1] regionserver.HStore(310): Store=a79daec580795a4ae5fef038a97752fb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:03,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:03,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:16:03,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a79daec580795a4ae5fef038a97752fb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10501363040, jitterRate=-0.02198435366153717}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:16:03,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a79daec580795a4ae5fef038a97752fb: 2023-07-15 13:16:03,638 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb., pid=6, masterSystemTime=1689426963621 2023-07-15 13:16:03,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:03,643 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 
2023-07-15 13:16:03,643 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a79daec580795a4ae5fef038a97752fb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:03,644 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689426963643"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426963643"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426963643"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426963643"}]},"ts":"1689426963643"} 2023-07-15 13:16:03,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-15 13:16:03,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure a79daec580795a4ae5fef038a97752fb, server=jenkins-hbase4.apache.org,37449,1689426962246 in 177 msec 2023-07-15 13:16:03,648 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-15 13:16:03,649 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a79daec580795a4ae5fef038a97752fb, ASSIGN in 339 msec 2023-07-15 13:16:03,649 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:16:03,649 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426963649"}]},"ts":"1689426963649"} 2023-07-15 13:16:03,651 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 13:16:03,653 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:16:03,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 386 msec 2023-07-15 13:16:03,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 13:16:03,670 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:16:03,670 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:03,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:03,674 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34056, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-15 13:16:03,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 13:16:03,692 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:16:03,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 18 msec 2023-07-15 13:16:03,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 13:16:03,704 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:03,705 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 13:16:03,706 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:16:03,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-07-15 13:16:03,713 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:16:03,714 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:16:03,716 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:03,716 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa empty. 
2023-07-15 13:16:03,717 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:03,717 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 13:16:03,723 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 13:16:03,726 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 13:16:03,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.331sec 2023-07-15 13:16:03,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-15 13:16:03,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-15 13:16:03,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 13:16:03,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37219,1689426961765-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-15 13:16:03,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37219,1689426961765-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
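The CreateNamespaceProcedure records and the /hbase/namespace/default and /hbase/namespace/hbase znode events above are the two built-in namespaces being registered as the master finishes initialization. For a user-defined namespace the equivalent public API is Admin.createNamespace; the sketch below is generic and the namespace name is made up.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // "my_ns" is a placeholder; the log itself only creates the built-in "default" and "hbase" namespaces.
          admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }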
2023-07-15 13:16:03,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 13:16:03,748 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 13:16:03,749 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8298f0dd6b09073959e56ada6670c0fa, NAME => 'hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 8298f0dd6b09073959e56ada6670c0fa, disabling compactions & flushes 2023-07-15 13:16:03,759 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. after waiting 0 ms 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:03,759 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:03,759 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 8298f0dd6b09073959e56ada6670c0fa: 2023-07-15 13:16:03,761 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:16:03,763 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426963762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426963762"}]},"ts":"1689426963762"} 2023-07-15 13:16:03,764 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
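The RegionOpenAndInit record above echoes the hbase:rsgroup descriptor: one 'm' family with a single version, the MultiRowMutationEndpoint coprocessor, and DisabledRegionSplitPolicy set through the SPLIT_POLICY metadata. Both table-level attributes have direct counterparts on TableDescriptorBuilder; the sketch below uses a placeholder table name and is only meant to show where those two attributes would be set in client code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeDescriptorSketch {
      public static void main(String[] args) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("rsgroup_like_example"))
            // Coprocessor and split policy class names exactly as printed in the descriptor above.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .build())
            .build();
        System.out.println(desc);
      }
    }

Disabling splits presumably keeps the rsgroup metadata in a single region, which is consistent with the later open record reporting DisabledRegionSplitPolicy instead of the stepping split policy used for the other system regions.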
2023-07-15 13:16:03,766 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:16:03,766 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426963766"}]},"ts":"1689426963766"} 2023-07-15 13:16:03,767 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 13:16:03,771 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:16:03,772 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:16:03,772 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:16:03,772 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:16:03,772 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:16:03,772 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8298f0dd6b09073959e56ada6670c0fa, ASSIGN}] 2023-07-15 13:16:03,773 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8298f0dd6b09073959e56ada6670c0fa, ASSIGN 2023-07-15 13:16:03,774 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=8298f0dd6b09073959e56ada6670c0fa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46785,1689426961939; forceNewPlan=false, retain=false 2023-07-15 13:16:03,784 DEBUG [Listener at localhost/36623] zookeeper.ReadOnlyZKClient(139): Connect 0x018cdb9c to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:03,790 DEBUG [Listener at localhost/36623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49f97bc0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:03,792 DEBUG [hconnection-0x74824f88-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:03,793 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55308, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:16:03,795 INFO [Listener at localhost/36623] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:03,795 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:03,924 INFO [jenkins-hbase4:37219] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
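The HBaseTestingUtility(1145) record above is where the test sees the minicluster as fully up, with jenkins-hbase4.apache.org,37219,1689426961765 as active master and the three regionservers (RS:0/RS:1/RS:2) registered. Purely as a sketch of the general pattern, not the actual TestRSGroupsAdmin1 code, starting and stopping a comparable three-regionserver minicluster looks roughly like this:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Three regionservers and three datanodes, matching the RS processes and WAL pipeline in the log.
        util.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build());
        try {
          System.out.println("active master = " + util.getHBaseCluster().getMaster().getServerName());
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }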
2023-07-15 13:16:03,925 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8298f0dd6b09073959e56ada6670c0fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:03,925 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426963925"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426963925"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426963925"}]},"ts":"1689426963925"} 2023-07-15 13:16:03,927 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 8298f0dd6b09073959e56ada6670c0fa, server=jenkins-hbase4.apache.org,46785,1689426961939}] 2023-07-15 13:16:04,080 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,080 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 13:16:04,082 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 13:16:04,085 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8298f0dd6b09073959e56ada6670c0fa, NAME => 'hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. service=MultiRowMutationService 2023-07-15 13:16:04,086 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,088 INFO [StoreOpener-8298f0dd6b09073959e56ada6670c0fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,089 DEBUG [StoreOpener-8298f0dd6b09073959e56ada6670c0fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/m 2023-07-15 13:16:04,089 DEBUG [StoreOpener-8298f0dd6b09073959e56ada6670c0fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/m 2023-07-15 13:16:04,090 INFO [StoreOpener-8298f0dd6b09073959e56ada6670c0fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8298f0dd6b09073959e56ada6670c0fa columnFamilyName m 2023-07-15 13:16:04,090 INFO [StoreOpener-8298f0dd6b09073959e56ada6670c0fa-1] regionserver.HStore(310): Store=8298f0dd6b09073959e56ada6670c0fa/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:04,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,094 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:04,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:16:04,098 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8298f0dd6b09073959e56ada6670c0fa; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@64841602, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:16:04,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8298f0dd6b09073959e56ada6670c0fa: 2023-07-15 13:16:04,099 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa., pid=11, masterSystemTime=1689426964080 2023-07-15 13:16:04,102 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:04,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:04,103 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=8298f0dd6b09073959e56ada6670c0fa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,103 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689426964103"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426964103"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426964103"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426964103"}]},"ts":"1689426964103"} 2023-07-15 13:16:04,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-15 13:16:04,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 8298f0dd6b09073959e56ada6670c0fa, server=jenkins-hbase4.apache.org,46785,1689426961939 in 177 msec 2023-07-15 13:16:04,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-15 13:16:04,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=8298f0dd6b09073959e56ada6670c0fa, ASSIGN in 334 msec 2023-07-15 13:16:04,108 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:16:04,108 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426964108"}]},"ts":"1689426964108"} 2023-07-15 13:16:04,110 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 13:16:04,113 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:16:04,114 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 409 msec 2023-07-15 13:16:04,208 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:04,209 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:16:04,211 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 13:16:04,211 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-15 13:16:04,216 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:16:04,216 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-15 13:16:04,216 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:16:04,216 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-15 13:16:04,217 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:04,217 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,219 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:16:04,220 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 13:16:04,298 DEBUG [Listener at localhost/36623] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 13:16:04,300 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32802, version=2.4.18-SNAPSHOT, 
sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 13:16:04,305 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 13:16:04,305 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:04,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 13:16:04,306 DEBUG [Listener at localhost/36623] zookeeper.ReadOnlyZKClient(139): Connect 0x2288e488 to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:04,311 DEBUG [Listener at localhost/36623] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4410115e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:04,311 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:04,315 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:16:04,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10169202853000a connected 2023-07-15 13:16:04,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,322 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 13:16:04,334 INFO [Listener at localhost/36623] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo 
with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 13:16:04,335 INFO [Listener at localhost/36623] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 13:16:04,335 INFO [Listener at localhost/36623] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39339 2023-07-15 13:16:04,335 INFO [Listener at localhost/36623] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 13:16:04,337 DEBUG [Listener at localhost/36623] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 13:16:04,337 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:04,338 INFO [Listener at localhost/36623] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 13:16:04,339 INFO [Listener at localhost/36623] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39339 connecting to ZooKeeper ensemble=127.0.0.1:62891 2023-07-15 13:16:04,344 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:393390x0, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 13:16:04,347 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(162): regionserver:393390x0, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 13:16:04,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39339-0x10169202853000b connected 2023-07-15 13:16:04,349 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-15 13:16:04,349 DEBUG [Listener at localhost/36623] zookeeper.ZKUtil(164): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 13:16:04,350 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39339 2023-07-15 13:16:04,350 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39339 2023-07-15 13:16:04,351 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39339 2023-07-15 13:16:04,351 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39339 2023-07-15 13:16:04,351 DEBUG [Listener at localhost/36623] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39339 2023-07-15 13:16:04,353 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 13:16:04,353 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'clickjackingprevention' 
(class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 13:16:04,353 INFO [Listener at localhost/36623] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 13:16:04,354 INFO [Listener at localhost/36623] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 13:16:04,354 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 13:16:04,354 INFO [Listener at localhost/36623] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 13:16:04,354 INFO [Listener at localhost/36623] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 13:16:04,355 INFO [Listener at localhost/36623] http.HttpServer(1146): Jetty bound to port 46293 2023-07-15 13:16:04,355 INFO [Listener at localhost/36623] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 13:16:04,357 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:04,357 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78c1d6c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,AVAILABLE} 2023-07-15 13:16:04,357 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:04,358 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c2ff8e6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-15 13:16:04,473 INFO [Listener at localhost/36623] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 13:16:04,474 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 13:16:04,474 INFO [Listener at localhost/36623] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 13:16:04,475 INFO [Listener at localhost/36623] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 13:16:04,475 INFO [Listener at localhost/36623] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 13:16:04,476 INFO [Listener at localhost/36623] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1bf06b02{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/java.io.tmpdir/jetty-0_0_0_0-46293-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7354195755582755213/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-15 13:16:04,477 INFO [Listener at localhost/36623] server.AbstractConnector(333): Started ServerConnector@10f1e12{HTTP/1.1, (http/1.1)}{0.0.0.0:46293} 2023-07-15 13:16:04,478 INFO [Listener at localhost/36623] server.Server(415): Started @46304ms 2023-07-15 13:16:04,480 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(951): ClusterId : 168658bf-d77c-40c2-a3cf-b9378aa6360a 2023-07-15 13:16:04,480 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 13:16:04,482 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 13:16:04,482 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 13:16:04,484 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 13:16:04,485 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ReadOnlyZKClient(139): Connect 0x35862fbf to 127.0.0.1:62891 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 13:16:04,489 DEBUG [RS:3;jenkins-hbase4:39339] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b41d77f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 13:16:04,489 DEBUG [RS:3;jenkins-hbase4:39339] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f7e44cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:04,498 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:39339 2023-07-15 13:16:04,498 INFO [RS:3;jenkins-hbase4:39339] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 13:16:04,498 INFO [RS:3;jenkins-hbase4:39339] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 13:16:04,498 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 13:16:04,498 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37219,1689426961765 with isa=jenkins-hbase4.apache.org/172.31.14.131:39339, startcode=1689426964334 2023-07-15 13:16:04,498 DEBUG [RS:3;jenkins-hbase4:39339] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 13:16:04,501 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39767, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 13:16:04,501 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37219] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,501 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 13:16:04,502 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221 2023-07-15 13:16:04,502 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40589 2023-07-15 13:16:04,502 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32893 2023-07-15 13:16:04,511 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:04,511 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:04,511 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:04,511 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,511 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,511 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:04,511 WARN [RS:3;jenkins-hbase4:39339] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 13:16:04,511 INFO [RS:3;jenkins-hbase4:39339] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 13:16:04,511 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,511 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39339,1689426964334] 2023-07-15 13:16:04,511 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 13:16:04,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:04,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:04,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:04,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,516 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-15 13:16:04,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:04,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:04,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:04,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,518 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,519 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:04,519 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:04,520 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:04,520 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ZKUtil(162): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,521 DEBUG [RS:3;jenkins-hbase4:39339] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 13:16:04,521 INFO [RS:3;jenkins-hbase4:39339] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 13:16:04,522 INFO [RS:3;jenkins-hbase4:39339] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 13:16:04,522 INFO [RS:3;jenkins-hbase4:39339] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 13:16:04,522 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:04,523 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 13:16:04,524 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,525 DEBUG [RS:3;jenkins-hbase4:39339] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 13:16:04,526 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:04,526 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:04,527 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 13:16:04,540 INFO [RS:3;jenkins-hbase4:39339] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 13:16:04,540 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39339,1689426964334-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 13:16:04,551 INFO [RS:3;jenkins-hbase4:39339] regionserver.Replication(203): jenkins-hbase4.apache.org,39339,1689426964334 started 2023-07-15 13:16:04,551 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39339,1689426964334, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39339, sessionid=0x10169202853000b 2023-07-15 13:16:04,551 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 13:16:04,551 DEBUG [RS:3;jenkins-hbase4:39339] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,551 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39339,1689426964334' 2023-07-15 13:16:04,551 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 13:16:04,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39339,1689426964334' 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 13:16:04,552 DEBUG [RS:3;jenkins-hbase4:39339] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 13:16:04,553 DEBUG [RS:3;jenkins-hbase4:39339] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 13:16:04,553 INFO [RS:3;jenkins-hbase4:39339] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 13:16:04,553 INFO [RS:3;jenkins-hbase4:39339] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 13:16:04,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:04,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:04,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:04,559 DEBUG [hconnection-0x55e0b7e1-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:04,560 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:16:04,565 DEBUG [hconnection-0x55e0b7e1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 13:16:04,566 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46900, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 13:16:04,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:04,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:04,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:32802 deadline: 1689428164571, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:04,571 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:04,572 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:04,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,573 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:04,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:04,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:04,622 INFO [Listener at localhost/36623] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=562 (was 525) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@2f58764b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1601831090-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:40589 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x1eb5301d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36623.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 40589 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@73db8336 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 40709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 40589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:53528 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x018cdb9c-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x35862fbf sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-28ac0a0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2015795077-2650 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221-prefix:jenkins-hbase4.apache.org,37449,1689426962246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data6/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x018cdb9c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 0 on default port 37083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221-prefix:jenkins-hbase4.apache.org,42269,1689426962094.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:40589 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:2;jenkins-hbase4:37449-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp100249944-2376 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1145745919-2366-acceptor-0@be5d40a-ServerConnector@788676c3{HTTP/1.1, (http/1.1)}{0.0.0.0:40757} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/41271-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:46785Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:42269 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) 
java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34061 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 2 on default port 40709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1145745919-2370 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp100249944-2383 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:53500 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1749324024@qtp-511129583-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38085 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp100249944-2381 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp100249944-2378 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:37449 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@20d1ffbb sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@180e4e43[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp100249944-2377 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x35862fbf-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:42262 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp765038518-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 40589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34061 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 37083 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2106140807@qtp-675808917-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41459 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-721101136_17 at /127.0.0.1:42258 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:40589 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:40589 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44709,1689426956415 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp3775084-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-721101136_17 at /127.0.0.1:53486 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp3775084-2274 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:42282 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@423321ce sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-506909356_17 at /127.0.0.1:50530 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x1eb5301d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2015795077-2643 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-638550a8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1601831090-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x2288e488-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data5/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-572-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x2288e488-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@40ae7760 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data3/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x55e0b7e1-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40709 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-506909356_17 at /127.0.0.1:42242 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-506909356_17 at /127.0.0.1:53464 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5afe4fdc java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2015795077-2647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp765038518-2336-acceptor-0@28a767dd-ServerConnector@778f6ca0{HTTP/1.1, (http/1.1)}{0.0.0.0:37465} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1145745919-2369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x7e813f95-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp100249944-2379 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server handler 2 on default port 36623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(1083747620) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:34061 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@20f1d9a6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp765038518-2342 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34061 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1145745919-2372 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36623 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1601831090-2306-acceptor-0@4b7c6d8e-ServerConnector@68f3a1b3{HTTP/1.1, (http/1.1)}{0.0.0.0:37321} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62025@0x6b3eff3d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2e2f65bd sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 36623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1601831090-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:62891 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: qtp100249944-2382 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:34061 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData-prefix:jenkins-hbase4.apache.org,37219,1689426961765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34061 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-944370253_17 at /127.0.0.1:53514 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp765038518-2338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-721101136_17 at /127.0.0.1:50548 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 37083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1601831090-2305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@70be6783[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 992686361@qtp-461690674-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1145745919-2365 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2015795077-2646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426962631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@7dfb63db java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1145745919-2367 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1601831090-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data1/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 37083 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-4c0d153b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x528e77ff-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data2/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x2288e488 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp3775084-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:40589 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:50568 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37219,1689426961765 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x018cdb9c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp3775084-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-568-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39339-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7b268947[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x74824f88-shared-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x4ca1657c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x7e813f95 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40589 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp3775084-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7367d8da-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:40589 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp3775084-2279 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:46785-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62025@0x6b3eff3d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 1 on default port 36623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x528e77ff sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1464669480.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42269Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp765038518-2340 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62025@0x6b3eff3d-SendThread(127.0.0.1:62025) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data4) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x1eb5301d-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1145745919-2368 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-944370253_17 at /127.0.0.1:42270 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221-prefix:jenkins-hbase4.apache.org,42269,1689426962094 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp765038518-2335 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/78138919.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2af447e3-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:34061 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase4:37449Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7f5530fa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46785 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:34061 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39339Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x4ca1657c-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-721101136_17 at /127.0.0.1:42206 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@541f6d9b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1260554388) connection to 
localhost/127.0.0.1:40589 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2af447e3-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39339 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1729491011@qtp-511129583-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:40589 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:62891): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2015795077-2645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 11445193@qtp-1516533373-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45597 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1601831090-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 328770806@qtp-461690674-0 java.lang.Object.wait(Native Method) 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp3775084-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1956215790@qtp-1516533373-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1145745919-2371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp3775084-2275-acceptor-0@5bbc9bb4-ServerConnector@5e57cc40{HTTP/1.1, (http/1.1)}{0.0.0.0:32893} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1601831090-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2015795077-2644-acceptor-0@263923ea-ServerConnector@10f1e12{HTTP/1.1, (http/1.1)}{0.0.0.0:46293} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426962635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:46785 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x4ca1657c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-944370253_17 at /127.0.0.1:50560 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp765038518-2341 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x7e813f95-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x55e0b7e1-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data4/current/BP-130633532-172.31.14.131-1689426961014 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_148979150_17 at /127.0.0.1:50558 [Receiving block BP-130633532-172.31.14.131-1689426961014:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x35862fbf-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp765038518-2339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1260554388) connection to localhost/127.0.0.1:40589 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7ceb383c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2015795077-2649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1458000209@qtp-675808917-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/36623-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41271-SendThread(127.0.0.1:62025) java.lang.Thread.sleep(Native 
Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RS:1;jenkins-hbase4:42269-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623-SendThread(127.0.0.1:62891) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp100249944-2380-acceptor-0@7873b656-ServerConnector@b3e4f97{HTTP/1.1, (http/1.1)}{0.0.0.0:40951} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2015795077-2648 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34061 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36623 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-130633532-172.31.14.131-1689426961014:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36623.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6e3a65f9 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221-prefix:jenkins-hbase4.apache.org,46785,1689426961939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-944370253_17 at /127.0.0.1:50498 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:37219 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) 
org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2861bf85-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62891@0x528e77ff-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=832 (was 823) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 329) - SystemLoadAverage LEAK? 
-, ProcessCount=170 (was 170), AvailableMemoryMB=5029 (was 5236) 2023-07-15 13:16:04,625 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=562 is superior to 500 2023-07-15 13:16:04,643 INFO [Listener at localhost/36623] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=561, OpenFileDescriptor=832, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=170, AvailableMemoryMB=5027 2023-07-15 13:16:04,643 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-15 13:16:04,643 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-15 13:16:04,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:04,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:16:04,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:04,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:04,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:04,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:04,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:04,655 INFO [RS:3;jenkins-hbase4:39339] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39339%2C1689426964334, suffix=, logDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,39339,1689426964334, archiveDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs, maxLogs=32 2023-07-15 13:16:04,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:04,658 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): 
Restoring servers: 0 2023-07-15 13:16:04,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:04,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:04,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:04,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:04,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,676 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK] 2023-07-15 13:16:04,676 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK] 2023-07-15 13:16:04,676 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK] 2023-07-15 13:16:04,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:04,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:04,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:32802 deadline: 1689428164677, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:04,677 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:04,679 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:04,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:04,680 INFO [RS:3;jenkins-hbase4:39339] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,39339,1689426964334/jenkins-hbase4.apache.org%2C39339%2C1689426964334.1689426964655 2023-07-15 13:16:04,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:04,680 DEBUG [RS:3;jenkins-hbase4:39339] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37231,DS-44e314fd-6fce-4cae-bb0f-22828eac673a,DISK], DatanodeInfoWithStorage[127.0.0.1:41935,DS-359df213-6f3a-4ec6-96b6-a19146c9cae7,DISK], DatanodeInfoWithStorage[127.0.0.1:43853,DS-a6159b29-4eeb-47a8-8382-f70ed04a3e02,DISK]] 2023-07-15 13:16:04,680 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:04,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:04,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:04,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE 
=> '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:04,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-15 13:16:04,685 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:16:04,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-15 13:16:04,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 13:16:04,687 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:04,688 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:04,688 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:04,690 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 13:16:04,691 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:04,692 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f empty. 
2023-07-15 13:16:04,692 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:04,692 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-15 13:16:04,706 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-15 13:16:04,707 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 662ac4fef6735cb45494c18454c9ef7f, NAME => 't1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 662ac4fef6735cb45494c18454c9ef7f, disabling compactions & flushes 2023-07-15 13:16:04,718 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. after waiting 0 ms 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:04,718 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:04,718 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 662ac4fef6735cb45494c18454c9ef7f: 2023-07-15 13:16:04,720 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 13:16:04,721 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426964721"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426964721"}]},"ts":"1689426964721"} 2023-07-15 13:16:04,723 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 13:16:04,723 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 13:16:04,723 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426964723"}]},"ts":"1689426964723"} 2023-07-15 13:16:04,724 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-15 13:16:04,731 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 13:16:04,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, ASSIGN}] 2023-07-15 13:16:04,732 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, ASSIGN 2023-07-15 13:16:04,735 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37449,1689426962246; forceNewPlan=false, retain=false 2023-07-15 13:16:04,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 13:16:04,885 INFO [jenkins-hbase4:37219] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 13:16:04,887 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=662ac4fef6735cb45494c18454c9ef7f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:04,887 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426964887"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426964887"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426964887"}]},"ts":"1689426964887"} 2023-07-15 13:16:04,888 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 662ac4fef6735cb45494c18454c9ef7f, server=jenkins-hbase4.apache.org,37449,1689426962246}] 2023-07-15 13:16:04,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 13:16:05,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:05,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 662ac4fef6735cb45494c18454c9ef7f, NAME => 't1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.', STARTKEY => '', ENDKEY => ''} 2023-07-15 13:16:05,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 13:16:05,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,045 INFO [StoreOpener-662ac4fef6735cb45494c18454c9ef7f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,046 DEBUG [StoreOpener-662ac4fef6735cb45494c18454c9ef7f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/cf1 2023-07-15 13:16:05,046 DEBUG [StoreOpener-662ac4fef6735cb45494c18454c9ef7f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/cf1 2023-07-15 13:16:05,047 INFO [StoreOpener-662ac4fef6735cb45494c18454c9ef7f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 662ac4fef6735cb45494c18454c9ef7f columnFamilyName cf1 2023-07-15 13:16:05,047 INFO [StoreOpener-662ac4fef6735cb45494c18454c9ef7f-1] regionserver.HStore(310): Store=662ac4fef6735cb45494c18454c9ef7f/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 13:16:05,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 13:16:05,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 662ac4fef6735cb45494c18454c9ef7f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10887961120, jitterRate=0.014020398259162903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 13:16:05,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 662ac4fef6735cb45494c18454c9ef7f: 2023-07-15 13:16:05,054 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f., pid=14, masterSystemTime=1689426965040 2023-07-15 13:16:05,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:05,056 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 
2023-07-15 13:16:05,056 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=662ac4fef6735cb45494c18454c9ef7f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:05,056 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426965056"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689426965056"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689426965056"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689426965056"}]},"ts":"1689426965056"} 2023-07-15 13:16:05,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-15 13:16:05,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 662ac4fef6735cb45494c18454c9ef7f, server=jenkins-hbase4.apache.org,37449,1689426962246 in 169 msec 2023-07-15 13:16:05,060 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-15 13:16:05,060 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, ASSIGN in 328 msec 2023-07-15 13:16:05,060 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 13:16:05,061 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426965061"}]},"ts":"1689426965061"} 2023-07-15 13:16:05,062 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-15 13:16:05,064 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 13:16:05,065 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 381 msec 2023-07-15 13:16:05,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 13:16:05,289 INFO [Listener at localhost/36623] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-15 13:16:05,289 DEBUG [Listener at localhost/36623] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-15 13:16:05,290 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,292 INFO [Listener at localhost/36623] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-15 13:16:05,292 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,292 INFO [Listener at localhost/36623] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-15 13:16:05,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 13:16:05,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-15 13:16:05,296 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 13:16:05,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-15 13:16:05,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:32802 deadline: 1689427025293, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-15 13:16:05,298 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-15 13:16:05,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:05,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:05,400 INFO [Listener at localhost/36623] client.HBaseAdmin$15(890): Started disable of t1 2023-07-15 13:16:05,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-15 13:16:05,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-15 13:16:05,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:16:05,411 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426965411"}]},"ts":"1689426965411"} 2023-07-15 13:16:05,412 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-15 13:16:05,414 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-15 13:16:05,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, UNASSIGN}] 2023-07-15 13:16:05,415 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, UNASSIGN 2023-07-15 13:16:05,416 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=662ac4fef6735cb45494c18454c9ef7f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:05,416 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426965416"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689426965416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689426965416"}]},"ts":"1689426965416"} 2023-07-15 13:16:05,417 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 662ac4fef6735cb45494c18454c9ef7f, server=jenkins-hbase4.apache.org,37449,1689426962246}] 2023-07-15 13:16:05,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:16:05,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 662ac4fef6735cb45494c18454c9ef7f, disabling compactions & flushes 2023-07-15 13:16:05,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:05,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:05,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. after waiting 0 ms 2023-07-15 13:16:05,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 
2023-07-15 13:16:05,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 13:16:05,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f. 2023-07-15 13:16:05,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 662ac4fef6735cb45494c18454c9ef7f: 2023-07-15 13:16:05,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,575 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=662ac4fef6735cb45494c18454c9ef7f, regionState=CLOSED 2023-07-15 13:16:05,575 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689426965575"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689426965575"}]},"ts":"1689426965575"} 2023-07-15 13:16:05,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-15 13:16:05,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 662ac4fef6735cb45494c18454c9ef7f, server=jenkins-hbase4.apache.org,37449,1689426962246 in 159 msec 2023-07-15 13:16:05,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-15 13:16:05,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=662ac4fef6735cb45494c18454c9ef7f, UNASSIGN in 163 msec 2023-07-15 13:16:05,579 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689426965579"}]},"ts":"1689426965579"} 2023-07-15 13:16:05,580 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-15 13:16:05,583 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-15 13:16:05,584 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 183 msec 2023-07-15 13:16:05,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 13:16:05,706 INFO [Listener at localhost/36623] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-15 13:16:05,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-15 13:16:05,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-15 13:16:05,709 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-15 13:16:05,709 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-15 13:16:05,710 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-15 13:16:05,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:05,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:05,713 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 13:16:05,715 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/cf1, FileablePath, hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/recovered.edits] 2023-07-15 13:16:05,719 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/recovered.edits/4.seqid to hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/archive/data/default/t1/662ac4fef6735cb45494c18454c9ef7f/recovered.edits/4.seqid 2023-07-15 13:16:05,720 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/.tmp/data/default/t1/662ac4fef6735cb45494c18454c9ef7f 2023-07-15 13:16:05,720 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-15 13:16:05,722 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-15 13:16:05,723 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-15 13:16:05,725 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-15 13:16:05,726 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-15 13:16:05,726 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-15 13:16:05,726 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689426965726"}]},"ts":"9223372036854775807"} 2023-07-15 13:16:05,727 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 13:16:05,727 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 662ac4fef6735cb45494c18454c9ef7f, NAME => 't1,,1689426964682.662ac4fef6735cb45494c18454c9ef7f.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 13:16:05,727 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-15 13:16:05,727 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689426965727"}]},"ts":"9223372036854775807"} 2023-07-15 13:16:05,729 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-15 13:16:05,730 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-15 13:16:05,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 23 msec 2023-07-15 13:16:05,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 13:16:05,815 INFO [Listener at localhost/36623] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-15 13:16:05,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:05,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:16:05,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:05,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:05,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:05,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:05,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:05,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:05,831 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:05,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:05,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:05,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:05,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:05,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:05,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:05,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:32802 deadline: 1689428165840, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:05,840 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:05,844 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,845 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:05,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:05,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:05,865 INFO [Listener at localhost/36623] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 561) - Thread LEAK? -, OpenFileDescriptor=842 (was 832) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 341), ProcessCount=170 (was 170), AvailableMemoryMB=5021 (was 5027) 2023-07-15 13:16:05,865 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-15 13:16:05,885 INFO [Listener at localhost/36623] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=170, AvailableMemoryMB=5020 2023-07-15 13:16:05,885 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-15 13:16:05,885 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-15 13:16:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:05,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:16:05,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:05,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:05,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:05,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:05,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:05,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:05,898 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:05,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:05,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,902 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:05,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:05,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:05,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:05,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:05,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428165909, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:05,909 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:16:05,911 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,912 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:05,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:05,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:05,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-15 13:16:05,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:16:05,915 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-15 13:16:05,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-15 13:16:05,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 13:16:05,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:05,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:16:05,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:05,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:05,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:05,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:05,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:05,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:05,936 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:05,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:05,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:05,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:05,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:05,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:05,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:05,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:05,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428165946, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:05,947 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:05,949 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:05,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,950 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:05,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:05,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:05,971 INFO [Listener at localhost/36623] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 341), ProcessCount=170 (was 170), AvailableMemoryMB=5020 (was 5020) 2023-07-15 13:16:05,971 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-15 13:16:05,992 INFO [Listener at localhost/36623] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=170, AvailableMemoryMB=5019 2023-07-15 13:16:05,992 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-15 13:16:05,993 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-15 13:16:05,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:05,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:05,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:05,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:16:05,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:05,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:05,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:05,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:06,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:06,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,006 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:06,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:06,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,008 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:06,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:06,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428166015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:06,015 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:16:06,017 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:06,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,018 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:06,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:06,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:06,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:06,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:16:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:06,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:06,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:06,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:06,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,033 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:06,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:06,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:06,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:06,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428166042, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:06,042 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:06,044 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:06,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,045 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:06,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:06,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:06,064 INFO [Listener at localhost/36623] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 341), ProcessCount=170 (was 170), AvailableMemoryMB=5019 (was 5019) 2023-07-15 13:16:06,064 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-15 13:16:06,083 INFO [Listener at localhost/36623] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=341, ProcessCount=170, AvailableMemoryMB=5018 2023-07-15 13:16:06,084 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-15 13:16:06,084 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-15 13:16:06,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:06,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 13:16:06,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:06,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:06,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,096 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:06,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:06,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,099 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:06,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:06,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428166108, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:06,109 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 13:16:06,110 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:06,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,111 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:06,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:06,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:06,112 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-15 13:16:06,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-15 13:16:06,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-15 13:16:06,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 13:16:06,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-15 13:16:06,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,127 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 13:16:06,136 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:16:06,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-15 13:16:06,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 13:16:06,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-15 13:16:06,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:32802 deadline: 1689428166229, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-15 13:16:06,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-15 13:16:06,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:16:06,250 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-15 13:16:06,251 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-15 13:16:06,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 13:16:06,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-15 13:16:06,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-15 13:16:06,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-15 13:16:06,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 13:16:06,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-15 13:16:06,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,364 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,366 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-15 13:16:06,367 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,368 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-15 13:16:06,369 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 13:16:06,369 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,371 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 13:16:06,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-15 13:16:06,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-15 13:16:06,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-15 13:16:06,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-15 13:16:06,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 13:16:06,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:32802 deadline: 1689427026478, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-15 13:16:06,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:06,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
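The testNamespaceConstraint sequence logged above exercises the coupling between region server groups and namespaces: a group referenced by a namespace's hbase.rsgroup.name property cannot be removed ("RSGroup Group_foo is referenced by namespace"), and a namespace cannot be created pointing at a group that does not exist ("Region server group foo does not exist"). The following is only a minimal client-side sketch of the calls that produce these log entries, assuming the HBase 2.4 rsgroup client API (RSGroupAdminClient, NamespaceDescriptor) and a reachable cluster; the namespace name ns_bad is illustrative and does not appear in this log.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupNamespaceConstraintSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      // Create a group and bind a namespace to it via hbase.rsgroup.name,
      // as the test does for Group_foo.
      groups.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      // Removing a group that a namespace still references is rejected:
      // "RSGroup Group_foo is referenced by namespace: Group_foo".
      try {
        groups.removeRSGroup("Group_foo");
      } catch (ConstraintException expected) {
        // constraint enforced server-side in RSGroupAdminServer.removeRSGroup
      }

      // Once the namespace is dropped, removing the group succeeds.
      admin.deleteNamespace("Group_foo");
      groups.removeRSGroup("Group_foo");

      // Creating a namespace that names a non-existent group is rejected:
      // "Region server group foo does not exist." (ns_bad is a made-up name).
      try {
        admin.createNamespace(NamespaceDescriptor.create("ns_bad")
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (ConstraintException expected) {
        // rejected in RSGroupAdminEndpoint.preCreateNamespace, as in the
        // stack trace above
      }
    }
  }
}
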
2023-07-15 13:16:06,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:06,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:06,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:06,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-15 13:16:06,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 13:16:06,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 13:16:06,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 13:16:06,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 13:16:06,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 13:16:06,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 13:16:06,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 13:16:06,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 13:16:06,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 13:16:06,495 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 13:16:06,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 13:16:06,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 13:16:06,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 13:16:06,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 13:16:06,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 13:16:06,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37219] to rsgroup master 2023-07-15 13:16:06,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 13:16:06,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:32802 deadline: 1689428166504, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 2023-07-15 13:16:06,504 WARN [Listener at localhost/36623] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37219 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 13:16:06,506 INFO [Listener at localhost/36623] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 13:16:06,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 13:16:06,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 13:16:06,507 INFO [Listener at localhost/36623] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37449, jenkins-hbase4.apache.org:39339, jenkins-hbase4.apache.org:42269, jenkins-hbase4.apache.org:46785], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 13:16:06,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 13:16:06,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37219] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 13:16:06,525 INFO [Listener at localhost/36623] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=341 (was 341), ProcessCount=170 (was 170), AvailableMemoryMB=5024 (was 5018) - AvailableMemoryMB LEAK? 
- 2023-07-15 13:16:06,525 WARN [Listener at localhost/36623] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-15 13:16:06,525 INFO [Listener at localhost/36623] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 13:16:06,525 INFO [Listener at localhost/36623] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 13:16:06,525 DEBUG [Listener at localhost/36623] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x018cdb9c to 127.0.0.1:62891 2023-07-15 13:16:06,525 DEBUG [Listener at localhost/36623] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,525 DEBUG [Listener at localhost/36623] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 13:16:06,525 DEBUG [Listener at localhost/36623] util.JVMClusterUtil(257): Found active master hash=2029530058, stopped=false 2023-07-15 13:16:06,525 DEBUG [Listener at localhost/36623] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 13:16:06,526 DEBUG [Listener at localhost/36623] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 13:16:06,526 INFO [Listener at localhost/36623] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:06,527 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:06,527 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:06,527 INFO [Listener at localhost/36623] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 13:16:06,527 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:06,527 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:06,528 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:06,527 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 13:16:06,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:06,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:06,528 DEBUG [Listener at 
localhost/36623] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x528e77ff to 127.0.0.1:62891 2023-07-15 13:16:06,528 DEBUG [Listener at localhost/36623] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:06,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:06,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 13:16:06,529 INFO [Listener at localhost/36623] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46785,1689426961939' ***** 2023-07-15 13:16:06,529 INFO [Listener at localhost/36623] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:16:06,529 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:06,529 INFO [Listener at localhost/36623] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42269,1689426962094' ***** 2023-07-15 13:16:06,533 INFO [Listener at localhost/36623] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:16:06,533 INFO [Listener at localhost/36623] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37449,1689426962246' ***** 2023-07-15 13:16:06,533 INFO [Listener at localhost/36623] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:16:06,533 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:06,533 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:06,533 INFO [Listener at localhost/36623] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39339,1689426964334' ***** 2023-07-15 13:16:06,534 INFO [Listener at localhost/36623] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 13:16:06,534 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:06,534 INFO [RS:0;jenkins-hbase4:46785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6fba47ee{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:06,537 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-15 13:16:06,537 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-15 13:16:06,537 INFO [RS:0;jenkins-hbase4:46785] server.AbstractConnector(383): Stopped ServerConnector@68f3a1b3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,538 INFO [RS:0;jenkins-hbase4:46785] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:16:06,538 INFO [RS:1;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@c8c2101{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:06,538 INFO [RS:2;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d9e1b0c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:06,539 INFO [RS:1;jenkins-hbase4:42269] server.AbstractConnector(383): Stopped ServerConnector@778f6ca0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,540 INFO [RS:0;jenkins-hbase4:46785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4e2c1e18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:06,540 INFO [RS:1;jenkins-hbase4:42269] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:16:06,540 INFO [RS:2;jenkins-hbase4:37449] server.AbstractConnector(383): Stopped ServerConnector@788676c3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,541 INFO [RS:0;jenkins-hbase4:46785] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6f0b8925{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:06,541 INFO [RS:1;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b79c49e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:06,541 INFO [RS:2;jenkins-hbase4:37449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:16:06,542 INFO [RS:1;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@c284dc1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:06,543 INFO [RS:2;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33138f5e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:06,542 INFO [RS:3;jenkins-hbase4:39339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1bf06b02{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-15 13:16:06,544 INFO [RS:2;jenkins-hbase4:37449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ef56e43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:06,544 INFO [RS:3;jenkins-hbase4:39339] server.AbstractConnector(383): Stopped ServerConnector@10f1e12{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,544 INFO [RS:3;jenkins-hbase4:39339] session.HouseKeeper(149): node0 Stopped scavenging 
2023-07-15 13:16:06,545 INFO [RS:1;jenkins-hbase4:42269] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:16:06,544 INFO [RS:0;jenkins-hbase4:46785] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:16:06,545 INFO [RS:3;jenkins-hbase4:39339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c2ff8e6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:06,546 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:16:06,546 INFO [RS:0;jenkins-hbase4:46785] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:16:06,547 INFO [RS:2;jenkins-hbase4:37449] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:16:06,547 INFO [RS:1;jenkins-hbase4:42269] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:16:06,547 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:16:06,547 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:16:06,547 INFO [RS:3;jenkins-hbase4:39339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78c1d6c7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:06,547 INFO [RS:1;jenkins-hbase4:42269] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:16:06,547 INFO [RS:2;jenkins-hbase4:37449] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:16:06,547 INFO [RS:2;jenkins-hbase4:37449] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:16:06,547 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(3305): Received CLOSE for a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:06,547 INFO [RS:3;jenkins-hbase4:39339] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 13:16:06,548 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:06,548 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 13:16:06,548 DEBUG [RS:2;jenkins-hbase4:37449] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1eb5301d to 127.0.0.1:62891 2023-07-15 13:16:06,548 DEBUG [RS:2;jenkins-hbase4:37449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,548 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 13:16:06,548 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1478): Online Regions={a79daec580795a4ae5fef038a97752fb=hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb.} 2023-07-15 13:16:06,548 DEBUG [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1504): Waiting on a79daec580795a4ae5fef038a97752fb 2023-07-15 13:16:06,547 INFO [RS:0;jenkins-hbase4:46785] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-15 13:16:06,548 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(3305): Received CLOSE for 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:06,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a79daec580795a4ae5fef038a97752fb, disabling compactions & flushes 2023-07-15 13:16:06,548 INFO [RS:3;jenkins-hbase4:39339] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 13:16:06,547 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:06,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8298f0dd6b09073959e56ada6670c0fa, disabling compactions & flushes 2023-07-15 13:16:06,548 INFO [RS:3;jenkins-hbase4:39339] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 13:16:06,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:06,548 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:06,549 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:06,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:06,548 DEBUG [RS:1;jenkins-hbase4:42269] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7e813f95 to 127.0.0.1:62891 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:06,549 DEBUG [RS:3;jenkins-hbase4:39339] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x35862fbf to 127.0.0.1:62891 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. after waiting 0 ms 2023-07-15 13:16:06,549 DEBUG [RS:0;jenkins-hbase4:46785] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ca1657c to 127.0.0.1:62891 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:06,549 DEBUG [RS:3;jenkins-hbase4:39339] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a79daec580795a4ae5fef038a97752fb 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 
after waiting 0 ms 2023-07-15 13:16:06,549 DEBUG [RS:1;jenkins-hbase4:42269] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:06,549 INFO [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39339,1689426964334; all regions closed. 2023-07-15 13:16:06,549 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8298f0dd6b09073959e56ada6670c0fa 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-15 13:16:06,549 INFO [RS:1;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:16:06,549 INFO [RS:1;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:16:06,549 INFO [RS:1;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:16:06,549 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 13:16:06,549 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 13:16:06,550 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1478): Online Regions={8298f0dd6b09073959e56ada6670c0fa=hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa.} 2023-07-15 13:16:06,550 DEBUG [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1504): Waiting on 8298f0dd6b09073959e56ada6670c0fa 2023-07-15 13:16:06,555 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 13:16:06,555 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-15 13:16:06,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 13:16:06,555 DEBUG [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-15 13:16:06,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 13:16:06,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 13:16:06,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 13:16:06,555 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 13:16:06,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-15 13:16:06,560 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,39339,1689426964334/jenkins-hbase4.apache.org%2C39339%2C1689426964334.1689426964655 not finished, retry = 0 2023-07-15 13:16:06,569 INFO [regionserver/jenkins-hbase4:0.leaseChecker] 
regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,569 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/.tmp/m/3132e3323a224635959bfaf6379a66d1 2023-07-15 13:16:06,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3132e3323a224635959bfaf6379a66d1 2023-07-15 13:16:06,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/.tmp/m/3132e3323a224635959bfaf6379a66d1 as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/m/3132e3323a224635959bfaf6379a66d1 2023-07-15 13:16:06,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/.tmp/info/f97374d040834c2da93e31e06dc3a839 2023-07-15 13:16:06,585 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/info/575bbc81ebdb47338f34266b443539f7 2023-07-15 13:16:06,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3132e3323a224635959bfaf6379a66d1 2023-07-15 13:16:06,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/m/3132e3323a224635959bfaf6379a66d1, entries=12, sequenceid=29, filesize=5.4 K 2023-07-15 13:16:06,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f97374d040834c2da93e31e06dc3a839 2023-07-15 13:16:06,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 8298f0dd6b09073959e56ada6670c0fa in 38ms, sequenceid=29, compaction requested=false 2023-07-15 13:16:06,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/.tmp/info/f97374d040834c2da93e31e06dc3a839 as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/info/f97374d040834c2da93e31e06dc3a839 2023-07-15 13:16:06,591 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed 
leases 2023-07-15 13:16:06,593 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 575bbc81ebdb47338f34266b443539f7 2023-07-15 13:16:06,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f97374d040834c2da93e31e06dc3a839 2023-07-15 13:16:06,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/info/f97374d040834c2da93e31e06dc3a839, entries=3, sequenceid=9, filesize=5.0 K 2023-07-15 13:16:06,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for a79daec580795a4ae5fef038a97752fb in 48ms, sequenceid=9, compaction requested=false 2023-07-15 13:16:06,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/rsgroup/8298f0dd6b09073959e56ada6670c0fa/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-15 13:16:06,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:16:06,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:06,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8298f0dd6b09073959e56ada6670c0fa: 2023-07-15 13:16:06,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689426963704.8298f0dd6b09073959e56ada6670c0fa. 2023-07-15 13:16:06,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/namespace/a79daec580795a4ae5fef038a97752fb/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-15 13:16:06,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 2023-07-15 13:16:06,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a79daec580795a4ae5fef038a97752fb: 2023-07-15 13:16:06,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689426963267.a79daec580795a4ae5fef038a97752fb. 
2023-07-15 13:16:06,619 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/rep_barrier/d8ad34de663a40fdb28aa67b93e3a826 2023-07-15 13:16:06,625 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d8ad34de663a40fdb28aa67b93e3a826 2023-07-15 13:16:06,633 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,635 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/table/3ddbe2cd128046488a6f25c51af0a105 2023-07-15 13:16:06,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ddbe2cd128046488a6f25c51af0a105 2023-07-15 13:16:06,641 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/info/575bbc81ebdb47338f34266b443539f7 as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/info/575bbc81ebdb47338f34266b443539f7 2023-07-15 13:16:06,646 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 575bbc81ebdb47338f34266b443539f7 2023-07-15 13:16:06,646 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/info/575bbc81ebdb47338f34266b443539f7, entries=22, sequenceid=26, filesize=7.3 K 2023-07-15 13:16:06,647 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/rep_barrier/d8ad34de663a40fdb28aa67b93e3a826 as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/rep_barrier/d8ad34de663a40fdb28aa67b93e3a826 2023-07-15 13:16:06,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d8ad34de663a40fdb28aa67b93e3a826 2023-07-15 13:16:06,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/rep_barrier/d8ad34de663a40fdb28aa67b93e3a826, entries=1, sequenceid=26, filesize=4.9 K 2023-07-15 13:16:06,652 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/.tmp/table/3ddbe2cd128046488a6f25c51af0a105 as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/table/3ddbe2cd128046488a6f25c51af0a105 
2023-07-15 13:16:06,656 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3ddbe2cd128046488a6f25c51af0a105 2023-07-15 13:16:06,656 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/table/3ddbe2cd128046488a6f25c51af0a105, entries=6, sequenceid=26, filesize=5.1 K 2023-07-15 13:16:06,657 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 102ms, sequenceid=26, compaction requested=false 2023-07-15 13:16:06,657 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 13:16:06,668 DEBUG [RS:3;jenkins-hbase4:39339] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs 2023-07-15 13:16:06,668 INFO [RS:3;jenkins-hbase4:39339] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39339%2C1689426964334:(num 1689426964655) 2023-07-15 13:16:06,668 DEBUG [RS:3;jenkins-hbase4:39339] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,668 INFO [RS:3;jenkins-hbase4:39339] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,669 INFO [RS:3;jenkins-hbase4:39339] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:06,669 INFO [RS:3;jenkins-hbase4:39339] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:16:06,669 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:16:06,669 INFO [RS:3;jenkins-hbase4:39339] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:16:06,669 INFO [RS:3;jenkins-hbase4:39339] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-15 13:16:06,671 INFO [RS:3;jenkins-hbase4:39339] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39339 2023-07-15 13:16:06,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-15 13:16:06,674 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 13:16:06,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 13:16:06,675 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 13:16:06,676 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,691 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,694 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39339,1689426964334 2023-07-15 13:16:06,694 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 
13:16:06,695 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39339,1689426964334] 2023-07-15 13:16:06,695 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39339,1689426964334; numProcessing=1 2023-07-15 13:16:06,697 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39339,1689426964334 already deleted, retry=false 2023-07-15 13:16:06,697 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39339,1689426964334 expired; onlineServers=3 2023-07-15 13:16:06,748 INFO [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37449,1689426962246; all regions closed. 2023-07-15 13:16:06,749 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-15 13:16:06,750 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-15 13:16:06,750 INFO [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46785,1689426961939; all regions closed. 2023-07-15 13:16:06,755 INFO [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42269,1689426962094; all regions closed. 2023-07-15 13:16:06,755 DEBUG [RS:2;jenkins-hbase4:37449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs 2023-07-15 13:16:06,756 INFO [RS:2;jenkins-hbase4:37449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37449%2C1689426962246:(num 1689426963001) 2023-07-15 13:16:06,756 DEBUG [RS:2;jenkins-hbase4:37449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,756 INFO [RS:2;jenkins-hbase4:37449] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,759 INFO [RS:2;jenkins-hbase4:37449] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:06,759 INFO [RS:2;jenkins-hbase4:37449] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:16:06,759 INFO [RS:2;jenkins-hbase4:37449] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 13:16:06,759 INFO [RS:2;jenkins-hbase4:37449] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:16:06,759 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 13:16:06,767 INFO [RS:2;jenkins-hbase4:37449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37449 2023-07-15 13:16:06,769 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:06,769 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,769 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:06,769 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37449,1689426962246 2023-07-15 13:16:06,772 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37449,1689426962246] 2023-07-15 13:16:06,772 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37449,1689426962246; numProcessing=2 2023-07-15 13:16:06,773 DEBUG [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs 2023-07-15 13:16:06,773 INFO [RS:0;jenkins-hbase4:46785] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46785%2C1689426961939:(num 1689426962968) 2023-07-15 13:16:06,773 DEBUG [RS:0;jenkins-hbase4:46785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,774 INFO [RS:0;jenkins-hbase4:46785] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,774 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37449,1689426962246 already deleted, retry=false 2023-07-15 13:16:06,774 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/WALs/jenkins-hbase4.apache.org,42269,1689426962094/jenkins-hbase4.apache.org%2C42269%2C1689426962094.meta.1689426963213.meta not finished, retry = 0 2023-07-15 13:16:06,775 INFO [RS:0;jenkins-hbase4:46785] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:06,775 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37449,1689426962246 expired; onlineServers=2 2023-07-15 13:16:06,775 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 13:16:06,775 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-15 13:16:06,775 INFO [RS:0;jenkins-hbase4:46785] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 13:16:06,775 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:16:06,776 INFO [RS:0;jenkins-hbase4:46785] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46785 2023-07-15 13:16:06,779 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:06,779 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46785,1689426961939 2023-07-15 13:16:06,779 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,780 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46785,1689426961939] 2023-07-15 13:16:06,780 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46785,1689426961939; numProcessing=3 2023-07-15 13:16:06,781 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46785,1689426961939 already deleted, retry=false 2023-07-15 13:16:06,781 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46785,1689426961939 expired; onlineServers=1 2023-07-15 13:16:06,783 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-15 13:16:06,783 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-15 13:16:06,878 DEBUG [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs 2023-07-15 13:16:06,878 INFO [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42269%2C1689426962094.meta:.meta(num 1689426963213) 2023-07-15 13:16:06,886 DEBUG [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/oldWALs 2023-07-15 13:16:06,886 INFO [RS:1;jenkins-hbase4:42269] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42269%2C1689426962094:(num 1689426962986) 2023-07-15 13:16:06,886 DEBUG [RS:1;jenkins-hbase4:42269] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,886 INFO [RS:1;jenkins-hbase4:42269] regionserver.LeaseManager(133): Closed leases 2023-07-15 13:16:06,886 INFO [RS:1;jenkins-hbase4:42269] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 13:16:06,887 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller 
exiting. 2023-07-15 13:16:06,888 INFO [RS:1;jenkins-hbase4:42269] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42269 2023-07-15 13:16:06,890 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 13:16:06,890 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42269,1689426962094 2023-07-15 13:16:06,891 ERROR [Listener at localhost/36623-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@55400723 rejected from java.util.concurrent.ThreadPoolExecutor@517f8953[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-15 13:16:06,891 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42269,1689426962094] 2023-07-15 13:16:06,891 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42269,1689426962094; numProcessing=4 2023-07-15 13:16:06,892 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42269,1689426962094 already deleted, retry=false 2023-07-15 13:16:06,893 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42269,1689426962094 expired; onlineServers=0 2023-07-15 13:16:06,893 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37219,1689426961765' ***** 2023-07-15 13:16:06,893 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 13:16:06,894 DEBUG [M:0;jenkins-hbase4:37219] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cc7b00d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 13:16:06,894 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 13:16:06,896 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-15 13:16:06,896 DEBUG [Listener at localhost/36623-EventThread] 
zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 13:16:06,896 INFO [M:0;jenkins-hbase4:37219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@15638a64{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-15 13:16:06,896 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 13:16:06,897 INFO [M:0;jenkins-hbase4:37219] server.AbstractConnector(383): Stopped ServerConnector@5e57cc40{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,897 INFO [M:0;jenkins-hbase4:37219] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 13:16:06,897 INFO [M:0;jenkins-hbase4:37219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f6563c8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-15 13:16:06,898 INFO [M:0;jenkins-hbase4:37219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@592e1b3c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/hadoop.log.dir/,STOPPED} 2023-07-15 13:16:06,898 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37219,1689426961765 2023-07-15 13:16:06,898 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37219,1689426961765; all regions closed. 2023-07-15 13:16:06,898 DEBUG [M:0;jenkins-hbase4:37219] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 13:16:06,898 INFO [M:0;jenkins-hbase4:37219] master.HMaster(1491): Stopping master jetty server 2023-07-15 13:16:06,899 INFO [M:0;jenkins-hbase4:37219] server.AbstractConnector(383): Stopped ServerConnector@b3e4f97{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 13:16:06,899 DEBUG [M:0;jenkins-hbase4:37219] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 13:16:06,900 DEBUG [M:0;jenkins-hbase4:37219] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 13:16:06,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426962631] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689426962631,5,FailOnTimeoutGroup] 2023-07-15 13:16:06,900 INFO [M:0;jenkins-hbase4:37219] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 13:16:06,900 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 13:16:06,900 INFO [M:0;jenkins-hbase4:37219] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-15 13:16:06,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426962635] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689426962635,5,FailOnTimeoutGroup] 2023-07-15 13:16:06,900 INFO [M:0;jenkins-hbase4:37219] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-15 13:16:06,900 DEBUG [M:0;jenkins-hbase4:37219] master.HMaster(1512): Stopping service threads 2023-07-15 13:16:06,900 INFO [M:0;jenkins-hbase4:37219] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 13:16:06,900 ERROR [M:0;jenkins-hbase4:37219] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-15 13:16:06,900 INFO [M:0;jenkins-hbase4:37219] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 13:16:06,901 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-15 13:16:06,901 DEBUG [M:0;jenkins-hbase4:37219] zookeeper.ZKUtil(398): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 13:16:06,901 WARN [M:0;jenkins-hbase4:37219] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 13:16:06,901 INFO [M:0;jenkins-hbase4:37219] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 13:16:06,901 INFO [M:0;jenkins-hbase4:37219] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 13:16:06,901 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 13:16:06,901 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:06,901 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:06,901 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 13:16:06,901 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 13:16:06,902 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-15 13:16:06,915 INFO [M:0;jenkins-hbase4:37219] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ae70c25d386a48169774830392cce23b 2023-07-15 13:16:06,920 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ae70c25d386a48169774830392cce23b as hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ae70c25d386a48169774830392cce23b 2023-07-15 13:16:06,925 INFO [M:0;jenkins-hbase4:37219] regionserver.HStore(1080): Added hdfs://localhost:40589/user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ae70c25d386a48169774830392cce23b, entries=22, sequenceid=175, filesize=11.1 K 2023-07-15 13:16:06,926 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78049, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=175, compaction requested=false 2023-07-15 13:16:06,928 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 13:16:06,928 DEBUG [M:0;jenkins-hbase4:37219] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 13:16:06,931 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/42836033-31cf-1a23-20c7-f9b22b68a221/MasterData/WALs/jenkins-hbase4.apache.org,37219,1689426961765/jenkins-hbase4.apache.org%2C37219%2C1689426961765.1689426962472 not finished, retry = 0 2023-07-15 13:16:07,032 INFO [M:0;jenkins-hbase4:37219] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-15 13:16:07,032 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 13:16:07,033 INFO [M:0;jenkins-hbase4:37219] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37219 2023-07-15 13:16:07,035 DEBUG [M:0;jenkins-hbase4:37219] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37219,1689426961765 already deleted, retry=false 2023-07-15 13:16:07,229 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 13:16:07,229 INFO [M:0;jenkins-hbase4:37219] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37219,1689426961765; zookeeper connection closed. 
2023-07-15 13:16:07,229 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): master:37219-0x101692028530000, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,329 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,329 INFO  [RS:1;jenkins-hbase4:42269] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42269,1689426962094; zookeeper connection closed.
2023-07-15 13:16:07,329 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101692028530002, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,330 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@59e364a4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@59e364a4
2023-07-15 13:16:07,429 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,429 INFO  [RS:0;jenkins-hbase4:46785] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46785,1689426961939; zookeeper connection closed.
2023-07-15 13:16:07,430 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:46785-0x101692028530001, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,430 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6523f1bc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6523f1bc
2023-07-15 13:16:07,530 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,530 INFO  [RS:2;jenkins-hbase4:37449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37449,1689426962246; zookeeper connection closed.
2023-07-15 13:16:07,530 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:37449-0x101692028530003, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,530 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@363454b7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@363454b7
2023-07-15 13:16:07,630 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,630 INFO  [RS:3;jenkins-hbase4:39339] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39339,1689426964334; zookeeper connection closed.
2023-07-15 13:16:07,630 DEBUG [Listener at localhost/36623-EventThread] zookeeper.ZKWatcher(600): regionserver:39339-0x10169202853000b, quorum=127.0.0.1:62891, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-15 13:16:07,630 INFO  [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7d9cf1ce] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7d9cf1ce
2023-07-15 13:16:07,630 INFO  [Listener at localhost/36623] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-15 13:16:07,631 WARN  [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-15 13:16:07,634 INFO  [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-15 13:16:07,726 WARN  [BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-130633532-172.31.14.131-1689426961014 (Datanode Uuid 93fc05bd-b830-4238-ad1d-c4ba32e91a23) service to localhost/127.0.0.1:40589
2023-07-15 13:16:07,727 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data5/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,727 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data6/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,737 WARN  [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-15 13:16:07,740 INFO  [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-15 13:16:07,843 WARN  [BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-15 13:16:07,843 WARN  [BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-130633532-172.31.14.131-1689426961014 (Datanode Uuid 9a63e9f3-235e-4a02-b774-9a6355d3e7a3) service to localhost/127.0.0.1:40589
2023-07-15 13:16:07,843 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data3/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,844 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data4/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,845 WARN  [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-15 13:16:07,847 INFO  [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-15 13:16:07,950 WARN  [BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-15 13:16:07,950 WARN  [BP-130633532-172.31.14.131-1689426961014 heartbeating to localhost/127.0.0.1:40589] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-130633532-172.31.14.131-1689426961014 (Datanode Uuid 67dee77f-f2d4-4f4e-a6c3-2404d30254ad) service to localhost/127.0.0.1:40589
2023-07-15 13:16:07,951 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data1/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,951 WARN  [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ea69f7cc-aa0b-f1ec-74e4-cc3e30d21049/cluster_8ef57f9b-ac49-f830-af7d-f1431b72c6a9/dfs/data/data2/current/BP-130633532-172.31.14.131-1689426961014] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-15 13:16:07,960 INFO  [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-15 13:16:08,074 INFO  [Listener at localhost/36623] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-15 13:16:08,099 INFO  [Listener at localhost/36623] hbase.HBaseTestingUtility(1293): Minicluster is down